Late last year we held a rapid deliberation that convened deep experts on the FDA alongside key participants in the AI policy debate, and we have pulled together several immediate insights into this memo. While it was clear that the path forward isn’t to port over any single regulatory model wholesale, a deep dive into the Food and Drug Administration’s approach to governing the pharmaceutical and medical device sectors provided a helpfully concrete basis for considering paths forward for AI governance.

Our goal is to share actionable takeaways from the deep dive that can serve as the basis for ongoing conversation. We plan to follow this with a deeper, more nuanced treatment of these issues and of the open questions the deliberation provoked, in a report to be published later this spring. In the interim, we welcome engagement and questions from others thinking through these issues.

You can download a PDF version of this post here.

10 key insights from a rapid expert deliberation on an ‘FDA for AI’

  1. An ‘FDA for AI’ is a blunt metaphor to build from. A more productive starting point would look at FDA-style regulatory interventions and how they may be targeted at different points in the AI supply chain.
  2. FDA-style interventions might be better suited for certain parts of the AI supply chain than others.
  3. The FDA model offers a powerful lesson in optimizing regulatory design for information production, rather than just product safety. This is urgently needed for AI, given the lack of clarity on market participants and the structural opacity of AI development and deployment.
  4. The lack of consensus on what counts as efficacy (rather than safety) is a powerful entry point for regulating AI. There will always be potential harms from AI; the regulatory question thus must consider whether the benefits outweigh the harms. But to know that, we need clear evidence – which we currently lack – of the specific benefits offered by AI technologies.
  5. Pre-market approval is potentially the most powerful stage of regulatory intervention: this is where the alignment between regulatory power and companies’ incentives to comply reaches its peak.
  6. For both the FDA and AI, assuring downstream compliance after a product enters the market is a regulatory challenge. Post-market surveillance is especially difficult for AI given the varied provenance of AI system components, yet it currently characterizes the bulk of ongoing AI regulatory enforcement.
  7. To have teeth, any regulatory intervention targeting the AI sector must go far beyond the current standard of penalties to meaningfully challenge some of the biggest companies in the world.
  8. Greater transparency into what constitutes the market itself, and the process through which AI products are sold, will be important to AI governance. Currently the contours of what constitutes the ‘AI market’ are underspecified and opaque.
  9. The funding model for a regulatory agency matters tremendously to its effectiveness, and can inadvertently make the regulator beholden to industry motives.
  10. FDA-style documentation requirements for AI would already be a step-change from the current accountability vacuum in AI. Encouraging stronger monitoring and compliance activities within AI firms like record-keeping and documentation practices would generate organizational reflexivity as well as provide legal hooks for ex-post enforcement.

Overview:

This memo outlines highlights from a rapid deliberation by a group of experts who together combine decades of experience studying the FDA, the pharmaceutical industry, and artificial intelligence. The group convened former government officials, academic researchers, medical doctors, lawyers, computer scientists, and journalists from a variety of countries for a collective deep dive into lessons from FDA-style regulation and their potential application to the domain of artificial intelligence. A more detailed report on the outcomes of this discussion is forthcoming; in the interim, this memo details a set of actionable takeaways that the conversation surfaced.


Further Reading:

The following publications enriched our conversation, and we’d recommend them as generative starting points for those who want to go further.


We’re grateful to those who participated in deliberation on these issues. While this memo offers highlights, the group did not always arrive at consensus, and individual findings have not been, and should not be, attributed to any specific individual. Participants in the conversation included: Julia Angwin, Miranda Bogen, Julie Cohen, Cynthia Conti-Cook, Matt Davies, Caitriona Fitzgerald, Ellen Goodman, Amba Kak, Vidushi Marda, Varoon Mathur, Deb Raji, Reshma Ramachandran, Joe Ross, Sandra Wachter, Sarah Myers West, and Meredith Whittaker.

We’re particularly grateful to our advisory council members: Hannah Bloch-Wehba, Amy Kapczynski, Heidy Khlaaf, Chris Morten, and Frank Pasquale, and our Visiting Policy Fellow Anna Lenhart.

The deliberation was facilitated by Alix Dunn and Computer Says Maybe, with support by Alejandro Calcaño.
