International Expert Consortium on AI

International Expert Consortium for the Regulation, Economics, and Computer Science of AI

About us

International Expert Consortium on Regulation, Computer Science, and Economics of AI.

Industry
Technology, Information and Internet
Company size
11-50 employees
Type
Nonprofit

Employees at International Expert Consortium on AI

Updates

  • Please join us tomorrow, on May 28, online!

    📢 Join us Tomorrow for an Exciting Online Talk on Content Moderation! 📢
    🗓️ Date: May 28, 2024
    🕓 Time: 4:15 – 5:45 p.m. CEST
    💡 Topic: Automated decisions on online platforms: the interplay between the GDPR, the DSA, and the AI Act
    🎙️ Speaker: Domingos Soares Farinho
    📚 Event: ENS Research Seminar
    🔍 Focus:
    - Interaction between the DSA and the AI Act
    - Algorithmic content moderation
    - Collaborative co-regulation
    Everyone is welcome to join!
    🔗 Link: https://lnkd.in/eiwzBJ6m
    Looking forward to seeing you there!
    #AI #DSA #AIAct #AlgorithmicModeration #ENSResearchSeminar #gdpr

    Join our Cloud HD Video Meeting

    europa-uni-de.zoom.us

  • My thoughts on the GDPR vs. OpenAI

    Large Language Lies and the Law: some thoughts on #noyb vs. OpenAI on false LLM output.
    Some key legal observations:
    1) GDPR vs. privacy law
    Concerning fake news about individuals, one might claim that personality rights, guaranteed by EU Member State tort law, are the right legal venue, not the GDPR. The German Constitutional Court made remarks in this direction in its 2019 "Recht auf Vergessen I" case (on the right to be forgotten). However, this would be wrong, in my opinion. The GDPR fully applies to fake news concerning personality aspects of data subjects and, if anything, takes primacy vis-à-vis Member State tort law. For a deep dive, see my habilitation, open access here, pages 525 ff.: https://lnkd.in/eVGWz_Qm
    2) Inspiration from personality rights
    Personality rights can be an inspiration, though. In 2013, the German Federal Court of Justice (BGH) decided the famous Autocomplete case. It held that Google is responsible for libelous autocompletes, for example those suggesting that the wife of the former German president was once a prostitute. However, the court imposed no duty to proactively filter autocompletes. Rather, companies need to set up a notice-and-action mechanism, correcting output once put on notice (cf. the DSA). This can be adapted to LLMs: autocomplete models are small language models. User expectations concerning the autocomplete's accuracy were decisive for the BGH. Despite disclaimers, such expectations may be said to exist with LLMs, too. Hence, at the very least, personality tort law suggests that OpenAI and others need to implement an effective notice-and-action mechanism. More discussion in our paper: https://lnkd.in/edDP2hpp See also the excellent, long analysis by Prof Sandra Wachter et al.: https://lnkd.in/eTQswCMK
    3) GDPR accuracy principle
    Art. 5(1)(d) GDPR enshrines the accuracy principle: personal data must be correct and up-to-date. However, this principle is not without limits (cf. Rec. 39). In my opinion, it must be weighed against countervailing fundamental rights, such as those enjoyed by the LLM providers. Hence, I would argue for a de minimis threshold. The result would be basically the same as under personality tort law: important false information violates the principle; trivial information perhaps does not (e.g., if ChatGPT gets my birthday wrong, unless that is important in the setting).
    4) Remedies
    a) Fines and damages, also for immaterial harm
    b) Users can potentially demand correction; difficult questions arise concerning the right to erasure with respect to the information encoded in the model
    c) In serious cases, users may enjoin companies from offering the model until the problem has been fixed
    This is why this case is so powerful, in my view: data protection and copyright are the claims that can "break" an LLM, as they can lead to injunctions and, potentially, model deletion.
    Comments welcome, legal history in the making!

  • Giving input at the German Parliament - always a special occasion!

    🌐 Great Panel on International AI Regulation, Hallucinations, and Sustainability at the German Parliament (Deutscher Bundestag) last Friday
    📸 Some of my students asked to capture the spirit of the event, so here are some fun pictures we took!
    🎤 I had the pleasure of sharing the stage with Anna Christmann, a member of the German Parliament and of the UN High-Level Advisory Body on AI, alongside my fantastic colleague Prof Sandra Wachter.
    🔍 Key Takeaways from Our Discussion:
    1️⃣ Mapping AI Regulation: The United Nations High-Level Advisory Body has nicely outlined the principles and potential functionalities of an international AI regulation framework in its interim report (https://lnkd.in/eQEFdVSq).
    2️⃣ Core Functionalities, in my view:
    - Gathering and publishing peer-reviewed critical information, especially on AI safety.
    - Providing a platform for streamlining global enforcement of AI regulations.
    - Working towards a Global AI Compact.
    3️⃣ Global Digital Compact Review: We discussed the zero draft of the Global Digital Compact: https://lnkd.in/eyQ2bYeG. It's a good start, but there is much room for improvement concerning binding measures, standards, and sustainability, to name just a few areas.
    4️⃣ Sustainability as a Blind Spot: Despite significant implications in terms of energy and water usage and toxic materials, sustainable AI regulation is in its infancy and neglected by current frameworks, including the #aiact (https://lnkd.in/eH5W7Mj5). The international AI scene can facilitate interdisciplinary research, promote standards, and explore regulatory options.
    5️⃣ From Principles to Implementation in International AI Regulation: To move from vague principles to concrete actions, we should rely on globally recognized standards that are currently being drafted. Each country could opt to join a Global AI Compact with reference to these standards, creating a safe harbor for AI companies operating in their jurisdictions. We're already seeing a global race for safe harbors; hence, this will be a crucial lever.
    🗨️ I welcome your comments and thoughts on these points!
    cc: European New School of Digital Studies Europa-Universität Viadrina Frankfurt (Oder)
    #AIRegulation #SustainableAI #GlobalStandards #AIConference #aiact

  • Today!

    Excited about my participation in today's event hosted by Berkeley and the European Law Institute for a "Transatlantic Conversation on Consumer Protection Law". Our focus will be on AI, liability, copyright, and their ramifications for consumer protection.
    📅 Date: Wednesday, April 3, 8:00 a.m. PDT / 17:00 CEST
    📍 Location: Zoom (details provided below)
    I will be presenting insights on the following topics:
    1) Direct Consumer Protections Through AI Prohibitions?
    2) Indirect Consumer Benefits of the AI Act
    3) Indirect Consumer Harms of the AI Act
    4) The Challenge of Innovation
    🔗 Join us for the session: https://lnkd.in/edXaPVgt
    Meeting ID: 983 7865 4539 Passcode: 769128
    Information: https://lnkd.in/eVsbcViR
    The event will open with introductions by Ted Mermin (Berkeley) and Christoph Busch (ELI / U Osnabrück), followed by contributions from myself and great Berkeley colleagues Colleen Chien and Pamela Samuelson. The remainder of the session will be devoted to questions from the audience. See you online!
    #aiact #consumerlaw

  • Finally out!

    📢 New Paper Alert! 🌍 Sustainable AI Regulation
    I am really thrilled about the publication of my latest article in the Common Market Law Review. This piece cuts into the often-neglected intersection of environmental sustainability and technology regulation. Think water consumption, GHG emissions, toxic materials, particularly, but not only, in GenAI contexts.
    🔍 The piece seeks to address a critical gap in current AI regulatory discourse by focusing on the environmental impacts of AI technologies, including their potential for mitigating, but also their contribution to, climate change and significant water usage. With an eye on large generative models like ChatGPT, GPT-4, and Gemini, I argue for the integration of sustainability considerations into technology regulation.
    📚 In existing law, I delve into:
    - Environmental law, like the EU Emissions Trading Regime and the Water Framework Directive, which do not easily apply to AI, if at all;
    - A sustainable interpretation of the GDPR; this includes a suggestion that individual rights, such as the GDPR's right to erasure, must be balanced with collective environmental interests, and may, counterintuitively, be limited as a result;
    - A critique of the final version of the AI Act, which takes many small steps in the right direction, mostly in terms of transparency and (potentially) risk management, but fails to comprehensively tackle this issue.
    ⏩ Going forward, I propose a suite of policy measures that not only align AI regulation with environmental sustainability but also offer a regulatory blueprint for other high-impact technologies like blockchain and the metaverse:
    1) Co-regulation (e.g., potentially binding codes of practice)
    2) Sustainability by design (e.g., sustainability impact assessments)
    3) Restrictions on AI training (e.g., binding quotas for electricity sourced from renewables)
    4) Consumption caps: potentially including AI processes and data centers under the Emissions Trading Regime. Today, it only covers "old-school" high-emissions sectors (steel plants; aviation; transport). Why not the novel ones?
    💡 The piece's goal is to reflect on a comprehensive framework capable of steering the dual transformations of digitization and climate change mitigation towards a more sustainable future.
    For those interested in reading more:
    🔗 Journal link: https://lnkd.in/eVxC2Hr4
    📄 Open access draft version: https://lnkd.in/eH5W7Mj5
    Many thanks for excellent comments to Lilian Edwards, Nora Ni Loideain, the participants in the fabulous #PLSC 2023 conference in Boulder, and the anonymous peer reviewers!
    #AIRegulation #Sustainableai #aiact #EnvironmentalLaw #EUGreenDeal #DigitalTransformation #ClimateChange #AI #OpenAccess #climatechangeai

    Article: Sustainable AI Regulation - Common Market Law Review

    kluwerlawonline.com

  • Great event today, also online!

    I'm excited to host a terrific event today on AI, genetic data, and surveillance. The panel discussion is in honor of Yves Moreau, the latest Einstein Foundation Award recipient, for his dedication to ethical standards in DNA data use and privacy in AI.
    🕒 Time: March 14, 2-4 PM CET
    📍 Place: Robert Koch Forum, Berlin; or online
    🔗 Register for in-person attendance: https://lnkd.in/eu3iGgRX
    Livestream: https://lnkd.in/eZKKaz_g
    Yves's commitment sets the stage for an exciting dialogue with our distinguished speakers:
    – Susanne Schreiber (Einstein Professor of Theoretical Neurophysiology at HU Berlin; Vice Chair of the German Ethics Council)
    – Helena Mihaljevic (Professor of Data Science, HTW Berlin)
    – Vince Madai (Research Lead of the Responsible Algorithms team at the QUEST Center for Responsible Research at Charité)
    The panel discussion on "The Pitfalls of Bad Practices in Genetic Big Data and AI" will delve into the ethical standards necessary for handling personal data in AI, highlighting the urgent need for responsible research practices. We will touch on crucial points for the future of privacy in our societies, and aim to pinpoint strategies to harness the vast potential of AI-driven analysis in healthcare while avoiding undue surveillance and political pressures. Yves has done an amazing job of pushing the boundaries in this field; he has also been a very vocal researcher, calling out numerous high-profile studies that have worked with genetic data obtained in obscure and illegitimate ways, particularly from vulnerable populations. Registration for both livestream and in-person attendance is still open. Do not miss this opportunity to contribute to a pivotal conversation.
    For more specific information, please refer to the event's page: https://lnkd.in/eu3iGgRX
    #EthicsInAI #GeneticData #BigData #AIForGood #DataPrivacy #EinsteinFoundation #Scientificquality #AIResearch #DigitalEthics #PrivacyPreservingAI
    cc: European New School of Digital Studies Einstein Foundation Berlin

    The Pitfalls of Bad Practices in Genetic Big Data and AI

    einsteinfoundation.de

  • Good interview today on the radio:

    🚀 Exciting News from the EU Parliament!
    Today marks a monumental step in the governance of Artificial Intelligence within the European Union. The EU Parliament has just officially adopted the first comprehensive AI Act in the Western world, a landmark decision that promises to shape the future of AI development and deployment across Europe, and potentially beyond. As I and many others have often said, it's not perfect (no law really is), but much better than nothing. https://lnkd.in/dQbrq4K7
    I had the honor of sharing some insights on the AI Act in a conversation with the German broadcasting station NDR 🎙️. Our discussion covered several critical areas:
    🧠 Foundation models: the backbone of AI's future, and their regulation.
    🔒 Biometric systems: navigating the challenges, particularly against the backdrop of democratic backsliding.
    🔄 Adaptability, the key to the AI Act's evolution: watch out for delegated and implementing acts; technical standards; revisions; guidelines by the AI Office and Board; and the operationalization of (deliberately) broad concepts in concrete cases and adjudications.
    🚀 Development and deployment in the EU, where more AI is crucial: mitigating the EU's skilled labor shortage, tackling understaffing in hospitals, and navigating complex geopolitical scenarios.
    Dive deeper into our conversation here: https://lnkd.in/dgDkCafC
    #AIAct #EUParliament #AIInnovation #FutureIsNow

    Weekly agenda | News | European Parliament

    europarl.europa.eu

  • New blog post on our paper out!

    📣 New Blog Post Alert: Fairness & Bias in Algorithmic Hiring 📣 - now doubly important due to the #aiact
    A few weeks ago, our team, consisting of computer scientists, legal scholars, and philosophers, released a paper on the intricate balance of fairness and bias in algorithmic hiring processes. If you are not in for the long haul of our 50-page analysis, here is an excellent blog post written by two of our authors, Alessandro Fabris and Matthew J. Dennis.
    Insights from Our Study:
    I. Algorithmic Hiring's Double-Edged Sword: While AI in recruitment promises to eliminate traditional biases, it also risks introducing or exacerbating discrimination, particularly against socio-economically disadvantaged groups and historically marginalized individuals.
    II. The Bias Sources: We've identified over twenty sources of bias, categorized into institutional biases, individual preference patterns, and technological blind spots.
    1. Institutional Biases: These are systemic issues within the hiring ecosystem, such as job segregation and elitism, where certain groups are historically underrepresented or favored in specific roles or industries.
    2. Individual Preferences: Often unconscious, these biases include generalized patterns of preference, like favoring candidates with uninterrupted career trajectories, which can disadvantage individuals who have taken career breaks for family or personal reasons.
    3. Technology Blind Spots: These arise from the limitations of the algorithms themselves and the data they are trained on. Examples include optimization algorithms for ad delivery that inadvertently exclude certain demographics or ableist technology that fails to accommodate candidates with disabilities.
    III. Fairness Flavours: Our study explores various dimensions of fairness, from outcome and impact fairness to accuracy, process, and representational fairness.
    IV. Mitigating Bias: Addressing these biases requires a multifaceted approach, focusing on both the technology and the processes surrounding its use:
    1. Proxy Reduction
    2. Diverse Data Sets
    3. Regular Audits
    4. Transparency and Accountability
    5. Ethical Guidelines
    V. The Data Dilemma: Challenges include a lack of diversity in datasets, a focus on early hiring stages, and missing data on sensitive attributes such as disability, religion, and sexual orientation.
    VI. Video Analysis Pitfalls: Our analysis raises concerns about the fairness of algorithms trained on video interviews, particularly for protected groups.
    🧩 Beyond Algorithmic Fairness: Our work underscores the complexity of deploying AI in hiring responsibly. It's not just about fairness metrics but also about understanding algorithms' broader impacts within societal and organizational contexts. We advocate for comprehensive strategies that consider human-machine interactions, dedicated data curation, and the validity of hiring tools.
    Blog post: https://lnkd.in/ewj5VWpc
    #AlgorithmicFairness #EthicalAI #AIinHR #ai

    Fairness and Bias in Algorithmic Hiring

    https://montrealethics.ai
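
    To make "outcome fairness" concrete: one common formalization is demographic parity, which compares the rate at which each group is selected. The sketch below is illustrative only and not taken from the paper or blog post; the groups, data, and function names are invented for the example.

    ```python
    # Illustrative sketch (not from the paper): two simple outcome-fairness
    # measures for a hypothetical hiring pipeline's decisions.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: list of (group, hired) pairs, hired in {0, 1}.
        Returns the fraction of candidates hired per group."""
        totals, hires = defaultdict(int), defaultdict(int)
        for group, hired in decisions:
            totals[group] += 1
            hires[group] += hired
        return {g: hires[g] / totals[g] for g in totals}

    def demographic_parity_gap(decisions):
        """Largest difference in selection rates across groups.
        0 means perfect parity; larger values mean more disparity."""
        rates = selection_rates(decisions)
        return max(rates.values()) - min(rates.values())

    # Made-up example data: group A is hired at 50%, group B at 25%.
    decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 0),
                 ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
    print(selection_rates(decisions))        # {'A': 0.5, 'B': 0.25}
    print(demographic_parity_gap(decisions)) # 0.25
    ```

    A metric like this captures only one "fairness flavour" (outcome fairness); it says nothing about process, accuracy, or representational fairness, which is precisely why the post warns against reducing responsible hiring AI to a single number.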

  • live in Berlin on Monday!

    ***Update: There will be a livestream!*** 🚀 Join us at the legendary Berlin AI Campus for an electrifying event on AI Regulation! 📅 Date: Feb 26, 2024, 6 pm CET 📍 Location: Merantix AI Campus Dive deep into the "Finalized EU AI Act" with the one and only Rasmus Rothe and myself on stage. Unpack the Act's impact, explore compliance strategies, and navigate through the EU's new regulatory framework. Perfect for business leaders, legal gurus, and AI enthusiasts eager to stay ahead. Would love to see you there! 🔗 Discover more and register: https://lnkd.in/eHwVnHW9 Link to the livestream on this page: https://lnkd.in/enB78xTx cc: European New School of Digital Studies #AIACT #AIRegulation #Compliance #LegalTech #BusinessLeadership #Innovation #AI #BerlinEvents #TechnologyEvents #AICompliance

    AI Campus Berlin

    merantix-aicampus.com

  • Excellent day today at the German Federal Ministry of Justice

    Bringing the German Federal Ministry of Justice Up to Speed on the AI Act 🚀 Exciting times: Today, the German Federal Ministry of Justice (Bundesministerium der Justiz), which co-led the German position on the AI Act, organized an entire AI Day for its members. Great idea and neat event, courtesy of the Ministry's very own Innovation HUB. A heartfelt appreciation goes to my old friend and fellow lawyer, Bernadette Kell, and her team for masterminding this enlightening session. 🙏 Featuring distinguished speakers like the Federal Minister of Justice 🎤 and representatives from Aleph Alpha 🤖 and KIRA Center for AI Risks & Impacts 📚. In my view, it is crucial to bring agencies, ministries, and other public actors up to speed as quickly as possible with respect to AI literacy. This gathering not only emphasized the need for technological and regulatory awareness but also reinforced our collective commitment to jointly navigating the complexities of responsible AI, between academia, industry, and the public sector. 💡👩⚖️ #AIAct #DigitalTransformation #ResponsibleAI #PublicSectorInnovation

