Today, AI experts and scientists led by Amba Kak and Dr. Sarah Myers West (AI Now Institute), Dr. Alex Hanna and Dr. Timnit Gebru (Distributed AI Research Institute), Maximilian Gahntz (Mozilla Foundation), Dr. Zeerak Talat (Independent researcher), and Dr. Mehtab Khan (Yale ISP) released a policy brief arguing that “general purpose artificial intelligence” (GPAI) carries serious risks and must not be exempted from the forthcoming EU AI Act. They were joined by over 50 institutional and individual signatories.

The brief offers guidance for EU regulators as they prepare to set the regulatory tone for addressing AI harms in the Act. It argues the following:

  1. GPAI is an expansive category. For the EU AI Act to be future-proof, it must apply across a spectrum of technologies, rather than be narrowly scoped to chatbots/large language models (LLMs). The definition used in the Council of the EU’s general approach for trilogue negotiations provides a good model.
  2. GPAI models carry inherent risks and have caused demonstrated and wide-ranging harms. While these risks carry over to a wide range of downstream actors and applications, they cannot be effectively mitigated at the application layer.
  3. GPAI must be regulated throughout the product cycle, not just at the application layer, to account for the range of stakeholders involved. The original development stage is crucial, and the companies developing these models must be accountable for the data they use and the design choices they make. Without regulation at the development layer, the current structure of the AI supply chain effectively enables actors developing these models to profit from distant downstream applications while evading any corresponding responsibility.
  4. Developers of GPAI should not be able to relinquish responsibility using a standard legal disclaimer. Such an approach creates a dangerous loophole that lets original developers of GPAI (often well-resourced large companies) off the hook, instead placing sole responsibility on downstream actors that lack the resources, access, and ability to mitigate all risks.

  5. Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. Standardized documentation practices and other approaches to evaluating GPAI models, specifically generative AI models, across many kinds of harm remain an active and hotly contested area of research, and should be subject to wide consultation, including with civil society, researchers, and other non-industry participants.

You can read the full brief and the list of signatories below: