Comment on EU AI Act

In April 2023, Avaaz requested expert insights from the Integrity Institute on aspects of the draft EU AI Act to inform its related advocacy efforts. Through conversations with Avaaz about where policymakers’ understanding of the implications and associated challenges of AI use could be deepened, select Institute members addressed three main topics in their comments:

  1. Examples of real-life harms that have resulted from the use of AI systems, to help policymakers understand the stakes;

  2. The categories of AI systems classified by the draft Act as “high risk”; and

  3. A draft methodology for auditing AI systems.

A draft of the Act has since passed through key committees, and the European Parliament is expected to vote on it in June. 

Note: The Integrity Institute facilitates conversations between external organizations and its membership in order to make member expertise available to the public. The Institute and its members did not receive any compensation from Avaaz for providing expert insights on the draft EU AI Act, and this summary of member comments does not constitute endorsement for Avaaz’s related advocacy efforts.


Summary of Member Comments on EU AI Act

Examples of real-life harm resulting from AI systems

  • Harms from AI systems that recommend content on social media platforms: When AI-powered content recommender systems are allowed to use past user behavior to predict future user behavior, these systems can learn to target people who are susceptible to addictive behaviors or harmful content, for example:

    • Targeting apps with gambling-like systems and behaviors to people predisposed to gambling addiction

    • Targeting people with eating disorders and body dysmorphia with extreme dieting and weight-loss content

    • Self-harm content

    • Dangerous misinformation

    • Recommendations to join hate groups or conspiracy groups.

  • Harm to democracy: Similar to the above, AI systems that rank content in newsfeeds, make recommendations, and distribute ads can be gamed in the service of misinformation campaigns. For example, in the 2016 US election, Russian actors tapped into vulnerabilities in the AI systems of Facebook and Twitter to achieve large-scale, targeted distribution of misinformation that arguably impacted the election. Cambridge Analytica also leveraged Facebook's AI systems during the 2016 US campaign and the Brexit referendum to target voters in ways that influenced behavior.

  • COVID deaths: Similarly, misinformation about COVID-19, likely responsible for hundreds of thousands of deaths, spread through social media AI ranking and recommendation systems.

    • Powerful actors weaponize these AI systems, exploiting the way social media enables mass behavior manipulation.

  • Misinformation risks associated with generative AI: Generative AI built on an LLM carries unexpected risks (e.g., “Sydney,” the persona that surfaced in Microsoft’s Bing Chat). Because of this, there is a risk of real-world harm resulting from AI-generated answers to questions users may pose, such as “what is the deep state” or “are vaccines a conspiracy”. Someone whose beliefs are reinforced by AI-generated content, and who lacks the media literacy to understand that it may not be factual, could take drastic action and harm others, as in the Pizzagate conspiracy. While some safeguards could be put in place to address misinformation, the rapidly evolving nature of misinformation and conspiracy theories makes this difficult to police. It may be worth considering how often training data is refreshed to help mitigate this risk.

  • Discrimination in employment and job applications: A job application screening system was found to discriminate against minorities, people over 40, and people with disabilities.

  • Facial recognition systems used by law enforcement incorrectly identifying people and leading to arrests (often with racial bias): There have been several examples of facial recognition systems misidentifying innocent people, particularly Black men, and leading to their arrest. A good summary of recent examples is provided in this Wired article.

  • The CSET AI Incident Database is a resource documenting additional incidents of harm.

On categories of high risk AI systems in Annex III

The draft Act lays out classification rules for so-called “high risk” AI systems, and lists systems in Annex III that may be considered high risk “if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons.” When reviewing the draft text, II members identified the following areas as missing from the current list in Annex III:

  • Any AI system that does the following should be considered a candidate for high risk:

    • Uses historical behavior of people

    • Predicts future actions of people

    • This would include engagement-focused content recommendation systems that track everything a user has engaged with in order to predict the probability of future engagement with content (see the illustrative sketch after this list).

  • Behavioral manipulation: This is linked to the harms caused by the spread of misinformation through social media by gaming the AI systems that control content ranking and recommendations (see the points about harm to democracy and mass behavior manipulation in the harms section above).

    • Related to the above point about AI systems that rank and recommend content using people's historical data or predict their future actions: any AI system used for ranking and recommending content relating to high-risk topics such as elections, health/medical information, terrorism, racism/discrimination, suicide/self-harm, etc.

    • Similarly, chatbots or generative AI built on language models can produce text and answers to questions on high-risk topics, which can lead to harms (see below).

    • Political campaign technologies that could be used to produce profiles of voters and persuade them are not accounted for in the current list.

  • Any biological/chemical/medical/scientific setting where AI could be used to create weapons or empower bad actors to produce them, including biological weapons, chemical weapons, and nuclear arms, or to cause environmental damage (i.e., deliberate alterations to ecosystems or the climate)

  • AI systems used by military or private security organizations (the current list covers only law enforcement and justice institutions)

  • “AI that makes other AI” should be considered high risk

  • The scale of an AI system is, in and of itself, a legitimate dimension along which to assess its risk
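
To make the first set of criteria above concrete (systems that use people's historical behavior to predict their future actions), here is a minimal, hypothetical Python sketch of an engagement-prediction recommender of the kind members flagged. All names, data, and weights are invented for illustration and do not represent any particular platform's system.

```python
# Illustrative sketch only: a hypothetical engagement-prediction recommender that
# (1) uses a person's historical behavior and (2) predicts their future actions --
# the two criteria members flagged as candidates for "high risk".
# All names, data, and weights below are invented for illustration.

import math

# Hypothetical per-user engagement history: counts of past interactions by topic.
user_history = {"dieting": 14, "fitness": 9, "news": 2, "cooking": 1}

# Candidate items to rank, each tagged with a topic.
candidates = [
    {"id": "post_1", "topic": "dieting"},
    {"id": "post_2", "topic": "news"},
    {"id": "post_3", "topic": "cooking"},
]

def predicted_engagement(history: dict, topic: str) -> float:
    """Predict the probability the user engages with an item on `topic`,
    based only on how often they engaged with that topic in the past."""
    total = sum(history.values())
    affinity = history.get(topic, 0) / total if total else 0.0
    # Squash affinity into a probability with a logistic function (toy weights).
    return 1.0 / (1.0 + math.exp(-(4.0 * affinity - 1.0)))

# Rank purely by predicted engagement -- note that nothing here asks whether the
# content is healthy for the user, which is how such systems can learn to keep
# serving, e.g., extreme dieting content to someone who already engages with it.
ranked = sorted(candidates,
                key=lambda c: predicted_engagement(user_history, c["topic"]),
                reverse=True)
for item in ranked:
    print(item["id"], round(predicted_engagement(user_history, item["topic"]), 3))
```

Because the objective is predicted engagement alone, such a system will keep surfacing whatever a user already engages with, which is the mechanism behind several of the harms described in the first section.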

Further comments on the designation of “high risk” and how it is conceptualized and applied were offered as well:

  • Safety is rightfully considered in Annex III, but it is not sufficient. In paragraph 2, for example, the phrase “AI systems intended to be used as safety components” in the management and operation of critical infrastructure is limiting and should be broadened to cover other significant risks.

  • The EU should be very cautious about sweeping general purpose AI (GPAI) into this regulation, including generative AI. The focus of the regulation has been, rightly, on particular high risk uses.

  • Individual developers of general purpose technology can’t practically delimit all of the different uses, nor fulfill all the same sorts of compliance obligations. This is particularly true for open source. What’s more, it can and should be the responsibility of the high risk user to find a developer/provider that has done the relevant conformity assessments, QA, and more; there will be natural market incentives for some GPAI providers who want to be considered for high risk uses to do this extra diligence, while others opt not to. 

  • To the extent GPAI is covered, the Act should focus on developers in an ongoing relationship with users making relevant information available, and proportionate requirements for documentation to downstream users where there is no ongoing relationship. The Act should consider an exclusion for open source here where code is not provided as part of an ongoing, commercial relationship and product.

  • Principled approaches to thinking about failures will lead to a better understanding of harms. There is recent work on compiling lists of AI failures and their implications and putting them into context. It should be understood that many failures are not clearly detectable by non-domain experts, and that the outputs of a conditional text generation model should not be used, without so-called guardrails and adequate verification, to influence decision-making that consequentially affects human lives.

Auditing AI Systems for Bias

Institute members were also given the opportunity to provide feedback on a draft framework for auditing AI systems for bias, which is a component of the AI Act’s requirements for systems designated as “high risk.”

Overall, member comments reflected the view that audits of AI systems should take a broader approach that allows more fundamental questions to be asked beyond whether a system is biased, including, for example, whether the AI system should exist at all. Even for audits limited to assessing bias, the traditional approach employing model cards would not be sufficient; a more system-wide approach would be needed to truly understand whether the AI system is exhibiting bias.

Requirements for AI audits should include transparency requirements for auditors and for users (or those impacted by an AI system). For example, audit requirements should stipulate that auditors have access to raw data and can reproduce findings from model/system developers, rather than simply trusting the developers to provide conclusions about their systems. Audits could include red-teaming exercises to stress test high-risk systems in various scenarios (testing for bias, among other impacts). Any audit should include disclosure of how data collected from people is used and how the system predicts what people will do. This is particularly pertinent for AI systems used to recommend content to people (e.g., on social platforms), and such data disclosure should give transparency into how the system responds to harmful content, which can reveal any relationship between the prevalence of harmful content and the system's design. Similarly, audits should contribute to the transparency and explainability of the AI system and its use, e.g., by demonstrating how individuals affected by an AI system can see and understand how decisions about them were made.
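
As one illustration of the kind of reproducible, raw-data check described above, the hypothetical Python sketch below computes positive-decision rates per demographic group and a simple disparity ratio. The records, group names, and the 0.8 threshold (borrowed from the “four-fifths” rule of thumb used in employment-selection contexts) are illustrative assumptions, not a prescribed audit methodology.

```python
# Illustrative sketch only: the kind of reproducible check an auditor with access to
# raw data could run, rather than relying on developer-supplied conclusions. It
# computes positive-decision rates per demographic group and a disparity ratio.
# The records and group names are hypothetical.

from collections import defaultdict

# Hypothetical raw audit data: each record is (demographic_group, system_decision).
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

counts = defaultdict(lambda: {"positive": 0, "total": 0})
for group, decision in records:
    counts[group]["total"] += 1
    counts[group]["positive"] += decision

rates = {g: c["positive"] / c["total"] for g, c in counts.items()}
print("Positive-decision rate by group:", rates)

# Disparity ratio: lowest group rate divided by highest. A common rule of thumb
# (the "four-fifths rule") treats ratios below 0.8 as a flag for further review.
disparity = min(rates.values()) / max(rates.values())
print("Disparity ratio:", round(disparity, 2),
      "- flag for review" if disparity < 0.8 else "- within threshold")
```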

Finally, other suggestions from II members addressed the additional documentation required to be shared in an AI audit. This should include documentation of the safety procedures (aka “break glass measures”) that would be used in an emergency, as well as documentation of exactly who in an organization has access to the data and the model itself, who can take the system offline (aka “kill switch” access), and what backup system goes into place if it does go offline.
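
As a purely illustrative example, the hypothetical Python sketch below shows one machine-readable shape such documentation could take, covering break-glass measures, data and model access, kill-switch owners, and fallback behavior, along with a simple completeness check an auditor might run. All field names and values are assumptions made for illustration.

```python
# Illustrative sketch only: a hypothetical structure for the operational documentation
# an audit could require. All field names and values are invented for illustration.

audit_documentation = {
    "system": "example-recommender-v2",  # hypothetical system name
    "break_glass_measures": [
        "Freeze model updates and revert to the last reviewed version",
        "Disable personalized ranking; fall back to chronological ordering",
    ],
    "data_access": ["data-engineering team", "internal audit team"],
    "model_access": ["ml-platform team"],
    "kill_switch_owners": ["on-call integrity lead", "VP of engineering"],
    "fallback_if_offline": "Serve a non-personalized, reverse-chronological feed",
}

# An auditor could check that every required field is present and non-empty.
required = ["break_glass_measures", "data_access", "model_access",
            "kill_switch_owners", "fallback_if_offline"]
missing = [field for field in required if not audit_documentation.get(field)]
print("Missing documentation fields:", missing or "none")
```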
