I. Establishing a Regulatory Perimeter

FDA regulation of pharmaceuticals is triggered by the “marketing” of a drug, which serves as a critical gate to market entry. In other industries, regulatory gates are placed around the sale of certain products, an approach that may be preferable to a marketing-based trigger given First Amendment concerns (see Section III below). Any attempt at sector-specific AI regulation will run into a thorny set of definitional questions: What constitutes the AI market, and how do products enter into commercial use?

In the absence of a regulatory mandate that requires AI companies to come forward and declare themselves, the contours of the “AI market” are only vaguely defined: Is every company that develops an AI system in-house part of the market and thus open to scrutiny? If so, then companies like Walmart are AI companies. By contrast, many companies do not develop AI systems themselves but procure and deploy systems developed by others, potentially fine-tuning models or making other adjustments in the process.

A second question related to perimeter concerns scale and impact: an approach taken in the Executive Order on AI and in the White House Voluntary Commitments relies on a scale threshold—systems trained using more than 10²⁶ floating-point operations—to carve out systems for scrutiny, based on the presumption that particular harms are associated with especially large systems.1This scale hypothesis is articulated in Markus Anderljung et al., “Frontier AI Regulation: Managing Emerging Risks to Public Safety,” arXiv, November 7, 2023, https://doi.org/10.48550/arXiv.2307.03718; and previously by Lennart Heim in “The Case for Pre-Emptive Authorizations for AI Training,” June 10, 2023, https://blog.heim.xyz/the-case-for-pre-emptive-authorizations. However, this presumption deserves closer scrutiny: the effect of this approach is to exclude all systems currently in operation from such reporting requirements, as the threshold just exceeds the compute used to train the largest currently deployed models. Furthermore, this approach overlooks the fact that the risks associated with AI depend closely on the contexts within which those systems are deployed.2Heidy Khlaaf, “Toward Comprehensive Risk Assessments and Assurance of AI-Based Systems,” Trail of Bits, March 7, 2023, https://www.trailofbits.com/documents/Toward_comprehensive_risk_assessments.pdf.
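To make the scale of such a threshold concrete, the brief sketch below applies a commonly used approximation of training compute—roughly 6 × (parameter count) × (training tokens)—and checks hypothetical models against a 10²⁶-operation line. The approximation and the example model sizes are illustrative assumptions, not figures drawn from the Executive Order or the proposals cited above.

```python
# Illustrative sketch: estimating whether a training run would cross a compute
# reporting threshold such as the 10^26 floating-point-operation line discussed above.
# The 6 * N * D rule of thumb and the example model sizes are assumptions for
# illustration only, not figures taken from the Executive Order or this report.

THRESHOLD_FLOP = 1e26  # reporting threshold, in total floating-point operations


def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute with the common 6 * N * D heuristic."""
    return 6 * n_parameters * n_training_tokens


# Hypothetical models (parameter and token counts are made up for illustration).
examples = {
    "mid-size model": (70e9, 2e12),       # 70B parameters, 2T training tokens
    "very large model": (1.5e12, 15e12),  # 1.5T parameters, 15T training tokens
}

for name, (params, tokens) in examples.items():
    flop = estimated_training_flop(params, tokens)
    status = "above" if flop > THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.2e} FLOP, {status} the 1e26 threshold")
```

On these assumptions, only the very large hypothetical model crosses the line, which illustrates the report’s point that a threshold set just above today’s largest systems captures few, if any, deployed models.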

This is not the only method for scoping AI systems for scrutiny. Other approaches that surfaced throughout the expert convening include examining systems based on a set of risk classifications, similar to the approach taken under the EU’s AI Act (see Appendix 1 for a list of other proposals incorporating risk classifications); structuring the market based on the commercialization of an AI product; or adopting a supply-chain approach that identifies the different sets of actors involved in each phase of AI development and tailors governance mechanisms to the specifics of their roles.3See Matt Davies and Michael Birtwistle, “Seizing the ‘AI Moment’: Making a Success of the AI Safety Summit,” Ada Lovelace Institute, September 7, 2023, https://www.adalovelaceinstitute.org/blog/ai-safety-summit; and Elliot Jones, “Foundation Models in the Public Sector,” Ada Lovelace Institute, accessed May 10, 2024, https://www.adalovelaceinstitute.org/project/foundation-models-gpai/.

A third and final consideration relates to how a new agency would interact with existing agencies. AI systems have use cases across the economy (public, private, and defense), and existing agencies already have authority to study and regulate specific applications of AI systems. A brief summary of some of those authorities is outlined in Appendix 1 below. For example, the FTC has clarified the ways its existing unfair or deceptive acts and practices authority pertains to AI systems; the FDA has recently issued guidance for AI in medical devices; and both the Trump and Biden administrations have asked agencies to explore ways to leverage AI systems to fulfill their mandates. Creating a new agency will require defining the boundaries of its jurisdiction. In the case of AI, this will likely mean transferring projects and responsibilities from existing agencies, establishing interagency collaboration, or both.

II. Enabling Robust Enforcement

Over the past two decades, artificial intelligence development has proceeded with comparatively little regulatory scrutiny, and many firms have amassed such size and scale that the penalties of enforcement agencies like the FTC amount to little more than a budget line item. Determining how to create incentive structures and sufficient regulatory friction to ensure compliance remains a difficult regulatory design problem.

Premarket approval authority. The FDA model hinges on the FDA’s ability to prevent pharmaceutical companies from marketing drugs to physicians—without which they cannot sell their drugs on the market. Controlling this essential gate to market entry is what grants the FDA a big stick, critical to its effectiveness as a regulator. At present, the analogous gates to market entry in AI are haphazard (for example, adoption of cloud services that hold federal data must go through the FedRAMP certification process, but such measures do not extend to the sector at large; OMB is also considering fast-tracking certain forms of AI through FedRAMP certification via the Emerging Technology Prioritization Framework4FedRAMP, “FedRAMP’s Emerging Technology Prioritization Framework,” January 26, 2024, https://www.fedramp.gov/assets/resources/documents/FedRAMP_DRAFT_Emerging_Technology_Prioritization_Framework.pdf.).

To have teeth, any regulatory intervention targeting the AI sector must be able to meaningfully challenge some of the biggest companies in the world. In addition to the FDA’s recall and debarment powers (described in Section 1 above), there are a number of powers common to federal agencies that FDA-style interventions might draw on, including but not limited to the following:

Rulemaking. Legislation is often structured around high-level mandates and principles, with technical details left to agency rulemaking. Most agencies undertake rulemaking under the Administrative Procedure Act (APA), which mandates notice-and-comment periods on draft rules and provides for judicial review.5Congressional Research Service, “Judicial Review under the Administrative Procedure Act (APA),” December 8, 2020, https://crsreports.congress.gov/product/pdf/LSB/LSB10558. Agency rules are legally binding. Rulemaking is used across agencies (most notably the FDA) to create standards for testing, reporting and audits, incident reporting and monitoring, bright-line rules regarding product use, and programs to monitor supply chains. Flexible measures, including nonbinding voluntary guidance and policy statements, may also prove important mechanisms given the dynamism of the field.

Federal Advisory Councils. The Federal Advisory Committee Act provides the framework under which agencies may “create advisory committees when nonfederal input is beneficial for decision-making.”6Congressional Research Service, “Federal Advisory Committee Act (FACA): Committee Establishment and Termination,” October 19, 2023, https://crsreports.congress.gov/product/pdf/IF/IF12102. This may be important for creating committees to assist with standards, provide recommendations on rules and guidance, or channel input from the communities most at risk from the use of AI systems.

Investigation authority. Federal agencies have varying levels of investigative authority, ranging from inquiries into suspected illegal activity to broader powers aimed at general information gathering. US enforcement agencies such as the FTC and DOJ have subpoena authority and the ability to issue “civil investigative demands” (CIDs). Both subpoenas and CIDs may be used to obtain existing documents or oral testimony. Agencies can also be given investigative powers to conduct studies. For example, Section 6(b) of the FTC Act enables the Commission to conduct wide-ranging studies that do not have a specific law enforcement purpose. These types of investigations give the FTC access to nonpublic information in order to understand an industry.

The final category of investigative authority that a new agency might require is the capacity to conduct interagency investigations. This entails the authority to exchange confidential information with other relevant enforcement agencies, subject to specified limitations and confidentiality assurances. Such authority facilitates collaboration among enforcement entities (as it does today for the FTC and other law enforcement agencies) and reduces the risk of redundant investigations.

Debarment. The FDA has the authority to prohibit specific individuals or corporations from engaging in FDA-regulated activities (essentially ending their career) based on illegal conduct (e.g., a clinical investigator who falsifies records). Debarment can be permanent or for a set period of time. The FDA maintains a public list of debarred entities.7Food and Drug Administration, “FDA Debarment List (Drug Product Applications),” updated June 13, 2024, https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/compliance-actions-and-activities/fda-debarment-list-drug-product-applications.

Import Alerts. The FDA has the authority to issue import alerts, or to “refuse admission” to the US market, if products “appear, from sample or otherwise” (“otherwise” may include a history of violations or a failed facility inspection) to violate the Federal Food, Drug, and Cosmetic Act (FD&C Act).8Office of the Commissioner, “Federal Food, Drug, and Cosmetic Act (FD&C Act),” Section 801(a), Food and Drug Administration, November 3, 2018, https://www.fda.gov/regulatory-information/laws-enforced-fda/federal-food-drug-and-cosmetic-act-fdc-act. The product is placed on an import alert list to notify border officials that it should be automatically detained. Once products are detained, the owner can present evidence that the products are safe and comply with FDA regulations, and the FDA can follow up with a determination to permit or refuse entry.9Congressional Research Service, “Enforcement of the Food, Drug, and Cosmetic Act: Select Legal Issues,” updated February 9, 2018, https://crsreports.congress.gov/product/pdf/R/R43609.

III. Navigating Legal Challenges

A number of legal challenges are particularly likely to come up in any conversation about the establishment of a regulatory agency devoted to artificial intelligence. While we bracketed these from the scope of our main conversation, we outline some of them here:

1. First Amendment and Content Moderation 

Content moderation is the term used to describe the decisions, processes, and practices that online platforms put in place regarding the treatment of “user-generated content” they host or amplify. While content moderation is often associated with social media, the overlap with AI should not be ignored. Developers of AI systems, including large language models (LLMs), may be considered to be engaged in activity adjacent to content moderation, for example by deciding what data to exclude from training sets and which user prompts to block or answer with fixed responses. Developers may also choose to directly moderate the output of generative models to comport with their own platform and usage policies, to address certain safety and other concerns, or to minimize brand risk.

As regulators discuss analyzing or overseeing these decisions, many of the challenges that come with making objective claims about what content is “safe” or “accurate” plague social media regulation as well. Additionally, social media platforms use AI systems to curate content in a user’s feed (e.g., Facebook’s Feed, TikTok’s For You, X’s For You) and to flag or help detect content that is banned under a platform’s terms of service. Because these activities are generally understood to be editorial in nature, lawmakers’ attempts to hold platforms accountable for activities that resemble content moderation could face First Amendment challenges.

NetChoice is challenging, on constitutional grounds, a Texas law that prohibits social media platforms from removing or labeling user posts and requires social media companies to disclose information about how they moderate and curate user content.10NetChoice v. Paxton, Knight First Amendment Institute at Columbia University, accessed November 26, 2023, http://knightcolumbia.tierradev.com/cases/netchoice-llc-v-paxton. The Supreme Court has remanded the case to the lower courts for reconsideration. If NetChoice were to win outright, the case could create a precedent that defines government-mandated disclosures regarding online platform decisions as unconstitutional, limiting the AI policy community’s ability to mandate disclosures. Groups such as the Knight First Amendment Institute argue that while portions of the Texas law violate the First Amendment, the law’s provisions requiring platform disclosures should be evaluated under the legal framework set out in the Supreme Court’s Zauderer decision, which applies deferential scrutiny to laws compelling companies to disclose factual and uncontroversial information about their services.11Ibid. If this line of argument prevails, mandated transparency reports (and impact assessments) for platforms could remain constitutional.

2. Section 230

Section 230 of the Communications Decency Act provides a liability shield for interactive computer services (ICS), defined as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server, including specifically a service or system that provides access to the Internet and such systems operated or services offered by libraries or educational institutions.”12Congressional Research Service, “Section 230: An Overview,” updated January 4, 2024, https://crsreports.congress.gov/product/pdf/R/R46751. The law specifies that no provider or user of an ICS shall be treated as the publisher or speaker of any information provided by another information content provider, and that providers shall not be held liable for voluntarily taking action to restrict objectionable material.

Many AI systems may be considered interactive computer services. To the extent that an AI system is engaged in the dissemination of user-generated content, these systems could be shielded from liability for harm. Thus far, US law seems to point toward recognizing AI systems used within social media platforms as protected by Section 230.13Gonzalez v. Google, 598 U.S. 617, May 18, 2023, https://www.supremecourt.gov/opinions/22pdf/21-1333_6j7a.pdf. Generative AI systems such as ChatGPT do not intermediate third-party content but create content, and some legal scholars believe that tools like ChatGPT will not be protected by Section 230.14Hasala Ariyaratne, “ChatGPT and Intermediary Liability: Why Section 230 Does Not and Should Not Protect Generative Algorithms,” SSRN, May 16, 2023, https://ssrn.com/abstract=4422583.

3. US Courts and Rulemaking 

As described above in Section II on enabling robust enforcement, rulemaking under the Administrative Procedure Act (APA) and other substantive laws is crucial for the sort of smart, effective, iterative regulation needed to keep pace with a quickly evolving technology. This requires an administrative agency to be empowered to make decisions in a way that limits challenges or reversals to the extent possible. This may be difficult given the current trend of the Supreme Court seeking to limit the powers of administrative agencies.

On June 28, 2024, the Supreme Court overturned the long-standing principle of “Chevron deference,” derived from the famous Chevron case,15Chevron U.S.A., Inc. v. NRDC, 467 U.S. 837, June 25, 1984, https://tile.loc.gov/storage-services/service/ll/usrep/usrep467/usrep467837/usrep467837.pdf. which required courts to defer to reasonable agency interpretations of ambiguous statutory provisions. For several decades, this deference formed the cornerstone of agency authority. Chevron deference made it unlikely that a pharmaceutical company would challenge, for instance, the FDA’s regulation of new products and devices, clinical trials, and premarket approvals as discussed in Section 2, since a court of law would be likely to defer to the FDA’s interpretation of the relevant provisions of law. Now, pursuant to the 2024 cases Loper Bright Enterprises v. Raimondo and Relentless, Inc. et al. v. Department of Commerce16603 U.S. ___ (2024), June 28, 2024, https://www.supremecourt.gov/opinions/23pdf/22-451_7m58.pdf. (known simply as “Loper Bright”), a reviewing court “need not and under the APA may not defer to an agency interpretation of the law simply because a statute is ambiguous.”17Ibid., p. 35. This means that when an agency interprets a statutory provision with inherent ambiguity (such as, for example, whether the FDA can regulate laboratory-developed tests as “devices”18“FDA Takes Action Aimed at Helping to Ensure the Safety and Effectiveness of Laboratory Developed Tests,” press release, Food and Drug Administration, April 29, 2024, https://www.fda.gov/news-events/press-announcements/fda-takes-action-aimed-helping-ensure-safety-and-effectiveness-laboratory-developed-tests.), the agency’s decision can be challenged before a court of law, which will then undertake a de novo assessment of whether the agency correctly interpreted the law. Chief Justice John Roberts categorically empowered courts to take up this role, stating that “agencies have no special competence in resolving statutory ambiguities. Courts do.”19603 U.S. ___ (2024), p. 23.

Loper Bright continues a trend of undermining agency authority that follows the establishment of the “major questions doctrine” in West Virginia v. EPA20West Virginia v. EPA, 597 U.S. 697, June 30, 2022, https://www.supremecourt.gov/opinions/21pdf/20-1530_n758.pdf. in 2022. There, the court held that when a statute raises a matter of vast “economic and political significance” (i.e., a major question), an agency cannot resolve that question on its own; its exercise of power must be supported by clear congressional delegation.

Loper Bright does not allow courts to override agency decisions in every instance. Chief Justice Roberts observes that some statutes “expressly delegate” to an agency the authority to interpret a particular statutory term, while others empower an agency to prescribe rules to “fill up the details” of a statutory scheme. Courts would have to respect such delegations by the legislature.

What does this mean for AI governance? Reducing ambiguities in law is the necessary first step, since challenges to agency authority arise when there is an identifiable ambiguity that the agency has stepped in to address. However, a certain degree of ambiguity is inherent in the regulation of emerging technologies, since a statute governing them will necessarily need to be flexible enough to keep pace with rapidly changing technology. Thus, in setting up a new agency, legislators should anticipate questions raised under Loper Bright as well as under West Virginia v. EPA, and offer the clear congressional delegation of authority that the Supreme Court has wanted to see in both cases. To minimize litigation risk and solidify the authority of a new agency, the parent statute should expressly empower it to interpret any ambiguities in the law and resolve any major questions the law raises.

This is not foolproof, because a court could still assess the agency’s power under the nondelegation doctrine—that is, it could assess whether the scope of delegated authority crosses into the impermissible territory of allowing an executive body to exercise essential legislative functions. It is also noteworthy that the court has recently limited the powers of other agencies such as the SEC and the EPA, creating an expectation that the powers of the administrative state will continue to be curtailed. It is important to track these developments and design a law that plans around the new standards set by these cases.

Spotlight: The Fight to Reclaim Technical Expertise Amid the Fall of Chevron Deference

IV. Preventing Industry Capture

High on the list of concerns about forming any novel agency charged with overseeing an industry is the risk that commercial interests might overwhelm the regulatory authority and independence of the agency. This is particularly pressing in the context of artificial intelligence, in which the leading firms hold considerable economic and political power and a track record of increasingly assertive lobbying.21AI Now Institute, “Tech and Financial Capital,” AI Now Institute (blog), April 11, 2023, https://ainowinstitute.org/spotlight/tech-and-financial-cap.

A prime example is how the FDA is funded. According to one estimate from 2022, 65 percent of the FDA’s work is funded through user fees paid by applicant firms.22Demasi, “From FDA to MHRA: Are Drug Regulators for Hire?” These take the shape of fees negotiated for a five-year period. The FDA negotiates with the industry it regulates over how the fees will be used: for example, when a medical reviewer is paid through the user fees of a particular applicant, that applicant will in turn receive regular reporting on how its fees were used and whether deadlines were met. This makes the FDA accountable for its accounting to the very companies it is reviewing, which significantly weakens the agency’s power and risks handing industry leverage. The FDA also provides reports to public stakeholders, but there is a significant disparity between the frequency of its meetings with industry and with the public (the latter category also includes trade associations, which are typically nonprofit).

Agencies struggle to avoid becoming “captured” or overly friendly toward industry for a few reasons: 

Industry lobbying. Industry players have more resources than consumer advocates to hire government affairs staff devoted to tracking the drafting of agency rules and guidance and to serving on committees. This imbalance, paired with companies’ ability to fund political campaigns, leads to a situation in which companies can bend regulatory text in their favor through direct engagement with agencies, or via meetings with friendly lawmakers who can write letters of support and make calls to agencies during rulemaking processes.

Revolving doors. Regulatory agencies require expert staff, which often means hiring people who have worked in the industry being regulated. When people switch jobs, they often maintain relationships with former colleagues. Additionally, agency jobs do not pay as well as industry jobs, which creates an incentive for agency staff to accommodate industry representatives in hopes of being recruited for jobs in the future. For example, former FDA Commissioner Dr. Scott Gottlieb joined Pfizer’s board of directors within four months of announcing his resignation. As commissioner, Gottlieb rolled out the Biosimilars Action Plan to promote the development of follow-on versions of biologic products; Pfizer happens to be a leading maker of biosimilars.23Karas, “FDA’s Revolving Door.” When agencies attempt to close the “revolving door,” it can result in an inability to hire top talent, especially in quickly evolving technical fields.24Ibid.

Consultants. In industries that rely heavily on outside evaluations/audits, the consulting industry (e.g., McKinsey, Accenture, Deloitte, EY) often becomes involved, including in direct work for expert agencies. These firms frequently conduct projects on behalf of the regulator while simultaneously working for industry players.25Committee on Oversight and Reform, The Firm and the FDA: McKinsey & Company’s Conflicts of Interest at the Heart of the Opioid Epidemic, Interim Majority Staff Report, U.S. House of Representatives, April 13, 2022, https://oversightdemocrats.house.gov/sites/evo-subsites/democrats-oversight.house.gov/files/2022-04-13.McKinsey%20Opioid%20Conflicts%20Majority%20Staff%20Report%20FINAL.pdf. 

Funding dependencies. The funding model for a regulatory agency matters tremendously for its effectiveness. If a regulator is dependent on funding from industry, this can inadvertently make the regulator beholden to industry motives.

In the AI context, an additional risk emerges due to the unique structure of the sector:

Infrastructure conflict of interest. Evaluating AI systems will likely require access to cloud computing infrastructure in order to run audits or to provision sandboxes for external auditors. The major providers of this infrastructure in the US (AWS, Azure, Google Cloud) offer products that themselves require agency oversight.

A frequent counterargument to concerns about regulatory capture is that industry must be engaged in the regulatory conversation because companies are closest to the ground and know the inner workings of their products. In many industries, however, regulators take a much more overtly adversarial posture expressly to ensure that the interests of the public and the economy at large are adequately protected against corporate malfeasance. A better frame might proceed from the position that any discussion about regulatory design must attend to the eventual likelihood of industry influence and must ensure, through its structure and accountability mechanisms, that industry does not get to set the agenda where it is involved in governance processes.