The United States is in a unique moment for AI policy: a groundswell of public interest in artificial intelligence has captured policymakers’ attention at a time when the White House has shown a historically rare willingness to take a more muscular posture toward the tech industry. At the same time, Congress has largely been hamstrung by political squabbling: in an election year, keeping the lights on, let alone passing complex and capital-intensive legislation, remains a hard task. This makes it an opportune moment to step back and directionally align on the best approach to core AI governance questions, orienting ourselves around the world of the possible rather than making incremental tradeoffs in service of the merely practicable.

Creating a novel AI agency whose primary responsibility would be to regulate the actors developing and deploying AI systems, either to supplant or to augment existing enforcement mechanisms distributed across the US government, is one of many possible paths forward.1 For a summary of such mechanisms, see Appendix 1. The current moment has reinvigorated an old idea: since 2017, a range of stakeholders have proposed such an agency, frequently invoking the construct of an “FDA for AI” alongside proposals for “nutritional labels” for AI systems and other licensing and certification schemes modeled on food and drug safety (see Box 1).

The FDA is the regulatory agency responsible for ensuring the safety, efficacy, and security of the nation’s food, biological, and medical products through a rigorous product-oversight and premarket-approval regime. Such an approach offers the promise of greater regulatory friction, placing the burden on companies to adequately vet their systems for efficacy and potential harm—before, rather than after, public release.2 See Accountable Tech, AI Now Institute, and EPIC, “Zero Trust AI Governance,” AI Now Institute (blog), August 10, 2023, https://ainowinstitute.org/publication/zero-trust-ai-governance; and Gianclaudio Malgieri and Frank Pasquale, “From Transparency to Justification: Toward Ex Ante Accountability for AI,” Brooklyn Law School, Legal Studies Paper No. 712, Brussels Privacy Hub Working Paper, No. 33, May 3, 2022, https://doi.org/10.2139/ssrn.4099657. On the other hand, this style of regulatory oversight could enable “check-box certification,” distracting from existing enforcement capabilities and creating opportunities for regulatory capture. If an independent agency is indeed the right path forward for AI regulation, circumventing these challenges—and setting a strong administrative foundation for accountability—would be key.

Given the growing momentum around models for AI governance, this is a pressing moment to be asking: Is an “FDA for AI” a good idea? And more specifically: By looking deeply into this example, can we become more concrete about what specific regulatory authorities are needed to effectively govern AI?

Who is talking about an “FDA for AI”?


Andrew Tutt, “An FDA for Algorithms”

“The products the FDA regulates, and particularly the complex pharmaceutical drugs it vets for safety and efficacy, are similar to black-box algorithms. And the crises the FDA has confronted throughout its more than one hundred years in existence are comparable to the kinds of crises one can easily imagine occurring because of dangerous algorithms. The FDA has faced steep resistance at every stage, but its capacity to respond to, and prevent, major health crises has resulted in the agency becoming a fixture of the American institutional landscape. We could draw on the FDA’s history for lessons, and use those lessons as an opportunity to avoid repeating that history.”


Olaf J. Groth, Mark J. Nitzberg, and Stuart J. Russell, “AI Algorithms Need FDA-Style Drug Trials”

“To protect the cognitive autonomy of individuals and the political health of society at large, we need to make the function and application of algorithms transparent, and the FDA provides a useful model. An oversight body would need to carry the authority of a government agency like the FDA, but also employ the depth of technical know-how found at existing technology-focused governing bodies like ICANN. It would need to house a rich diversity of expertise to grasp the breadth of society, seating psychologists and sociologists alongside programmers and economists. Because not every piece of code needs tight oversight, it would need distinct trigger points on when to review and at what level of scrutiny, similar to the ways the FDA’s powers stretch or recede for pharmaceuticals versus nutritional supplements.”


Inioluwa Deborah Raji et al., “Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance”

“Given the limitations of the current AI audit landscape, we now proceed to examine what we can learn from audit systems in other domains… Some of these audit schemes already intersect with AI products—the National Transportation Safety Board (NTSB) has provided third party analysis of self-driving car crashes; the Food and Drug Administration (FDA) is already approving AI enabled medical devices; and many current internal audit templates derive from documentation requirements in these other industries. Some AI products, in that sense, are already subject to incumbent audit policies.”


Sam Altman, “Senate Judiciary Subcommittee Hearing on Oversight of AI”

“Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards. Number two, I would create a set of safety standards focused on what you said in your third hypothesis as the dangerous capability evaluations. One example that we’ve used in the past is looking to see if a model can self-replicate and self-exfiltrate into the wild. We can give your office a long other list of the things that we think are important there, but specific tests that a model has to pass before it can be deployed into the world. And then third I would require independent audits. So not just from the company or the agency, but experts who can say the model is or is not in compliance with these stated safety thresholds and these percentages of performance on question X or Y.”


Markus Anderljung et al., “Frontier AI Regulation: Managing Emerging Risks to Public Safety”

“A more anticipatory, preventative approach to ensuring compliance is to require a governmental license to widely deploy a frontier AI model, and potentially to develop it as well. Licensure and similar “permissioning” requirements are common in safety-critical and other high-risk industries, such as air travel, power generation, drug manufacturing, and banking. U.S. public opinion polling has also looked at the issue. A January 2022 poll found 52 percent support for a regulator providing pre-approval of certain AI systems, akin to the FDA.”


Adam Thierer and Neil Chilson, “The Problem with AI Licensing & an ‘FDA for Algorithms’”

“Interest in artificial intelligence (AI) and its regulation has exploded at all levels of government, and now some policymakers are floating the idea of licensing powerful AI systems and perhaps creating a new “FDA for algorithms,” complete with a pre-market approval regime for new AI applications. Other proposals are on the table, including transparency mandates requiring government-approved AI impact statements or audits, “nutrition labels” for algorithmic applications, expanded liability for AI developers, and perhaps even a new global regulatory body to oversee AI development.

It’s a dangerous regulatory recipe for technological stagnation that threatens to derail America’s ability to be a leader in the Computational Revolution and build on the success the nation has enjoyed in the digital economy over the past quarter century.”


Connor Dunlop and Merlin Stein, “Safe before Sale: Learnings from the FDA’s model of life sciences oversight for foundation models”

“The FDA has processes, regulatory powers and a culture that helps to identify and mitigate risks across the development and deployment process, from pre-design through to post-market monitoring. This holistic approach provides lessons for the AI regulatory ecosystem.”