As mentioned above, AI systems are used throughout society and already fall within the purview of existing laws and agency jurisdiction. The FTC, the Consumer Financial Protection Bureau (CFPB), the U.S. Department of Justice (DOJ), and the U.S. Equal Employment Opportunity Commission (EEOC) all enforce existing discrimination laws in housing, employment, financial services, and other domains.1 Rohit Chopra et al., “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” Federal Trade Commission, April 25, 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf. What follows is a brief summary of how AI is currently regulated under existing enforcement structures.
The Federal Trade Commission has broad authority to protect consumers under Section 5(a) of the FTC Act, which “prohibits unfair or deceptive acts or practices in or affecting commerce.” Recently, the commission has warned companies that they are accountable for misleading claims related to AI.2 Michael Atleson, “Keep Your AI Claims in Check,” Federal Trade Commission, February 27, 2023, https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check. In an article on its blog, the FTC cautions companies to “consider at the design stage and thereafter the reasonably foreseeable . . . ways it could be misused for fraud or cause other harm. Then ask . . . whether such risks are high enough that you shouldn’t offer the product at all.”3 Michael Atleson, “Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale,” Federal Trade Commission, March 20, 2023, https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale. The FTC has brought actions against businesses that sold or distributed potentially harmful technology without taking reasonable measures to prevent injury to consumers.4 Ibid. The FTC has also required companies to delete algorithms trained on illegally collected data.5 Kate Kaye, “FTC Case against Weight Watchers Means Death for Algorithms,” Protocol, March 14, 2022, https://web.archive.org/web/20240114131137/https://www.protocol.com/policy/ftc-algorithm-destroy-data-privacy.
The Department of Justice’s Civil Rights Division enforces constitutional provisions and federal laws prohibiting discrimination across many facets of society, including education, the criminal justice system, employment, housing, lending, and voting. In June 2022, the DOJ settled its lawsuit against Meta. The complaint alleged that Meta developed algorithms that enabled advertisers to target housing ads based on characteristics protected under the Fair Housing Act. As part of the settlement, Meta is required to develop a new ad-delivery algorithm that addresses “racial and other disparities.”6 Office of Public Affairs, “Justice Department Secures Groundbreaking Settlement Agreement with Meta Platforms, Formerly Known as Facebook, to Resolve Allegations of Discriminatory Advertising,” press release, U.S. Department of Justice, June 21, 2022, https://www.justice.gov/opa/pr/justice-department-secures-groundbreaking-settlement-agreement-meta-platforms-formerly-known.
The Equal Employment Opportunity Commission enforces federal laws that make it illegal for an employer to discriminate against an applicant or employee because of race, color, religion, sex, national origin, age, disability, or genetic information. In May 2023, the agency released a technical assistance document focused on preventing discrimination against job applicants and employees when employers use software, algorithms, and AI in selection procedures.7 U.S. Equal Employment Opportunity Commission, “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” May 18, 2023, https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial.
The Consumer Financial Protection Bureau oversees consumer financial products and services. In May 2022, the CFPB cautioned that if credit decision technology is “too complex, opaque, or novel” to explain adverse credit decisions, companies cannot use the complexity as a defense against Equal Credit Opportunity Act violations.8 Consumer Financial Protection Bureau, “CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms,” May 26, 2022, https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms. In August 2022, the CFPB issued an interpretive rule stating that when digital marketers identify prospective customers or place content to influence consumer behavior, they are generally considered service providers under the Consumer Financial Protection Act. If their actions violate federal consumer financial protection laws, for example by deploying an algorithm for unlawfully targeted marketing, they can be held responsible.9 Consumer Financial Protection Bureau, “CFPB Warns that Digital Marketing Providers Must Comply with Federal Consumer Finance Protections,” August 10, 2022, https://www.consumerfinance.gov/about-us/newsroom/cfpb-warns-that-digital-marketing-providers-must-comply-with-federal-consumer-finance-protections.
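To make the explainability expectation concrete, here is a minimal sketch of one common technique for generating the specific reasons an adverse action notice requires: ranking per-feature model contributions (for example, SHAP values) and mapping the most adverse features to human-readable reason codes. The feature names, reason codes, and values are entirely hypothetical; this illustrates the general approach, not a CFPB-endorsed method.

```python
# Hypothetical example: deriving the "principal reasons" for an adverse
# credit decision from per-feature model contributions. ECOA/Regulation B
# adverse action notices must state specific reasons, and the CFPB has
# said model complexity is no defense. All names and values are invented.

# Hypothetical mapping from model features to adverse action reasons.
REASON_CODES = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Number of delinquent accounts",
    "history_length": "Length of credit history",
    "recent_inquiries": "Number of recent credit inquiries",
}

def principal_reasons(contributions, top_n=4):
    """Return readable reasons for the features that pushed the score
    most strongly toward denial (negative contributions here)."""
    adverse = sorted(
        (name for name, c in contributions.items() if c < 0),
        key=lambda name: contributions[name],  # most negative first
    )
    return [REASON_CODES[name] for name in adverse[:top_n] if name in REASON_CODES]

# Per-feature contributions for one denied applicant, e.g., SHAP values.
contribs = {
    "utilization": -0.42,
    "delinquencies": -0.18,
    "history_length": 0.05,
    "recent_inquiries": -0.07,
}
print(principal_reasons(contribs))
```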
The Food and Drug Administration regulates some AI systems through its Software as a Medical Device (SaMD) process. Software products undergo a distinct review process that classifies devices based on associated health risks and imposes increasingly stringent requirements, ranging from labeling and registration at the low end to premarket approval and clinical studies at the high end. Given the dynamic nature of AI systems that continue “learning” after approval is granted, the predetermined change control plan process is particularly salient to how the FDA regulates AI and is subject to ongoing debate.10 See Center for Devices and Radiological Health, “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices,” Food and Drug Administration, April 22, 2024, https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices; Center for Devices and Radiological Health, “Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles,” Food and Drug Administration, October 24, 2023, https://www.fda.gov/medical-devices/software-medical-device-samd/predetermined-change-control-plans-machine-learning-enabled-medical-devices-guiding-principles; and Stein and Dunlop, Safe before Sale. Numerous proposals for how the SaMD process could be adapted for AI have been put forward. See, for example, Eric Wu et al., “How Medical AI Devices Are Evaluated: Limitations and Recommendations from an Analysis of FDA Approvals,” Nature Medicine 27, no. 4 (April 2021): 582–84, https://doi.org/10.1038/s41591-021-01312-x; Stan Benjamens, Pranavsingh Dhunnoo, and Bertalan Meskó, “The State of Artificial Intelligence-Based FDA-Approved Medical Devices and Algorithms: An Online Database,” npj Digital Medicine 3, no. 1 (September 11, 2020): 1–8, https://doi.org/10.1038/s41746-020-00324-0; and Phoebe Clark, Jayne Kim, and Yindalon Aphinyanaphongs, “Marketing and US Food and Drug Administration Clearance of Artificial Intelligence and Machine Learning Enabled Software in and as Medical Devices: A Systematic Review,” JAMA Network Open 6, no. 7 (July 5, 2023): e2321792, https://doi.org/10.1001/jamanetworkopen.2023.21792.
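The logic of a predetermined change control plan can be sketched in a few lines: a post-market model update ships without a new regulatory submission only if it stays within modification types and performance bounds that were authorized in advance. The field names and thresholds below are hypothetical illustrations of that idea, not values drawn from FDA guidance.

```python
# Minimal sketch of a predetermined change control plan (PCCP) check.
# The envelope (allowed change types, performance floors) is hypothetical.
from dataclasses import dataclass

@dataclass
class ChangeControlPlan:
    allowed_changes: frozenset   # modification types authorized in advance
    min_sensitivity: float       # performance floors agreed at authorization
    min_specificity: float

@dataclass
class ProposedUpdate:
    change_type: str
    sensitivity: float           # measured on a locked validation set
    specificity: float

def within_plan(plan, update):
    """True if the update may ship under the pre-authorized plan;
    anything outside the envelope requires a new submission."""
    return (
        update.change_type in plan.allowed_changes
        and update.sensitivity >= plan.min_sensitivity
        and update.specificity >= plan.min_specificity
    )

plan = ChangeControlPlan(frozenset({"retrain_on_new_data"}), 0.92, 0.90)
print(within_plan(plan, ProposedUpdate("retrain_on_new_data", 0.94, 0.91)))  # True
print(within_plan(plan, ProposedUpdate("new_input_modality", 0.97, 0.95)))   # False
```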
Software vendors have traditionally been shielded from liability for harm to end users through warranty disclaimers and contractual limitations of liability; however, this may be shifting.11Jey Kumarasamy and Brenda Leong, “Third-Party Liability and Product Liability for AI Systems,” International Association of Privacy Professionals (IAPP), July 26, 2023, https://iapp.org/news/a/third-party-liability-and-product-liability-for-ai-systems. The discrimination laws described above, which are increasingly being used to hold algorithms accountable, are one example of this shift.
Similarly, as part of the litigation following a large-scale Marriott data breach, a U.S. district judge found that Marriott’s information technology service provider had a duty of care to Marriott’s customers to prevent a data breach.12 In re Marriott International, Inc., Customer Data Security Breach Litigation, No. 8:19-md-02879 (D. Md. Oct. 27, 2020), https://www.govinfo.gov/content/pkg/USCOURTS-mdd-8_19-md-02879/pdf/USCOURTS-mdd-8_19-md-02879-8.pdf. Additionally, product liability law is governed by the states and may vary in its applicability to AI systems. (New York recognizes a “failure to warn” category, for example, whereas other states do not.)
The National Institute of Standards and Technology (NIST) is part of the U.S. Department of Commerce and is responsible for “creating critical measurement solutions and promoting equitable standards.”13 “About,” National Institute of Standards and Technology (NIST), July 10, 2009; updated January 11, 2022, https://www.nist.gov/about-nist. NIST works with international standards bodies and stakeholders toward the development of internet and cybersecurity standards.14 “AI Standards Development Activities with Federal Involvement,” NIST, August 10, 2020; updated May 2, 2024, https://www.nist.gov/standardsgov/ai-standards-development-activities-federal-involvement. In response to Executive Order 13859, NIST issued “a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.”15 “A Plan for Federal Engagement in Developing AI Technical Standards and Related Tools in Response to Executive Order (EO 13859),” NIST, August 10, 2019; updated April 5, 2022, https://www.nist.gov/artificial-intelligence/plan-federal-engagement-developing-ai-technical-standards-and-related-tools. Later, Congress passed Division E of the National Defense Authorization Act for Fiscal Year 2021; Section 5301 directed NIST to create the AI Risk Management Framework (RMF).16 William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, H.R. 6395, 116th Cong. (2019–2020), https://www.congress.gov/bill/116th-congress/house-bill/6395/text. Version 1 of the RMF was published in January 2023 and has since been referenced in executive orders and legislative proposals.17 “AI Risk Management Framework,” NIST, accessed July 19, 2024, https://www.nist.gov/itl/ai-risk-management-framework. In November 2023, NIST launched the U.S. AI Safety Institute to evaluate known and emerging risks of foundation models.18 Paul Sandle and David Shepardson, “US to Launch Its Own AI Safety Institute,” Reuters, November 1, 2023, https://www.reuters.com/technology/us-launch-its-own-ai-safety-institute-raimondo-2023-11-01. As standards come into place, legislators in Congress or in state legislatures can reference these standards, moving them from a voluntary compliance tool to a mandate.19 See, for example, Federal Artificial Intelligence Risk Management Act of 2023, S. 3205, 118th Cong. (2023–2024), https://www.congress.gov/bill/118th-congress/senate-bill/3205. In this way, NIST is a key actor in any AI regulatory regime, though its ongoing funding challenges raise concerns about the risk of regulatory capture.20 Frank Lucas et al. to Laurie Locascio, December 14, 2023, Committee on Science, Space, and Technology, Congress of the United States, House of Representatives, https://democrats-science.house.gov/imo/media/doc/2023-12-14_AISI%20scientific%20merit_final-signed.pdf.
Congress and the executive branch oversee government use and procurement of AI systems, processes that are in essence premarket approval mechanisms for AI systems used by the government. If lawmakers mandate that certain disclosures, tests, and standards accompany use and become part of the procurement process, those standards start to become a de facto mandate for the private sector (at least for products and services suitable for government use).
In December 2020, the Trump administration’s Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government directed agencies to maintain an inventory of AI use cases and to “design, develop, acquire and use AI” in a responsible manner.21 Executive Office of the President, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” Executive Order 13960, Federal Register, December 3, 2020, https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government. In 2022, Congress passed the Advancing American AI Act, which directed the executive branch to issue policies related to “the acquisition and use of artificial intelligence” and the “civil liberties impacts of artificial intelligence-enabled systems.”22 James M. Inhofe National Defense Authorization Act for Fiscal Year 2023, H.R. 7776, 117th Cong. (2021–2022), https://www.congress.gov/bill/117th-congress/house-bill/7776/text.
The Biden Administration’s Executive Order on AI
The Executive Order issued under the Trump administration saw only partial compliance,23 Ben Winters, “Compilation of Federal Govt AI Use Case Inventories,” spreadsheet, accessed July 19, 2024, https://docs.google.com/spreadsheets/d/1FH-fzqwOsifhG-rp-MB7me6W9_XZIbRFkwfQRMObfRs/edit?usp=sharing. leading President Biden to follow up with several additional administrative actions,24 “Administration Actions on AI,” AI.gov, accessed November 26, 2023, https://ai.gov/actions. including the Blueprint for an AI Bill of Rights25 “Blueprint for an AI Bill of Rights,” White House Office of Science and Technology Policy (OSTP), accessed May 10, 2024, https://www.whitehouse.gov/ostp/ai-bill-of-rights. and a subsequent Executive Order on AI that charged agencies across government with a series of tasks tied to increasing AI adoption as well as implementing guardrails.26 White House, “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Most notably, Section 7 of Executive Order 14110 reminds agencies to “consider opportunities to ensure that their respective civil rights and civil liberties offices are appropriately consulted on agency decisions regarding the design, development, acquisition, and use of AI in Federal Government programs and benefits administration.”27 Executive Office of the President, “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government.” Section 10 reemphasizes the provisions of the Advancing American AI Act, directing the Office of Management and Budget (OMB) to develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with NIST guidance.
New Oversight for Dual-Use Foundation Models
Section 4 of Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence leverages the Defense Production Act to oversee foundation models.28 Executive Office of the President, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Federal Register, October 30, 2023, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence. See also “Stanford Safe, Secure, and Trustworthy AI EO 14110 Tracker,” spreadsheet, accessed July 19, 2024, https://docs.google.com/spreadsheets/d/1xOL4hkQ2pLR-IAs3awIiXjPLmhIeXyE5-giJ5nT-h1M/edit?usp=sharing; and “AI Exec Order: Human-Readable Edition,” Google doc, accessed July 19, 2024, https://docs.google.com/document/d/1u-MUpA7TLO4rnrhE2rceMSjqZK2vN9ltJJ38Uh5uka4/edit. Section 4 requires companies developing so-called dual-use foundation models29 In the executive order, a dual-use foundation model is defined as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters, such as by: (i) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear (CBRN) weapons; (ii) enabling powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or (iii) permitting the evasion of human control or oversight through means of deception or obfuscation.” to provide the federal government with a description of the cybersecurity protections in place to protect model weights and the results of the foundation model’s performance in “AI red-team testing” based on guidance developed by NIST.30 See Section 4.2(a)(i). Additionally, Section 4 includes a supply chain tracking component, requiring entities “that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster.”
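Section 4.2(b) of the order sets interim technical thresholds for these reporting duties: training runs exceeding 10^26 integer or floating-point operations (10^23 for models trained primarily on biological sequence data), and co-located clusters networked at over 100 Gbit/s with a theoretical maximum of 10^20 operations per second for AI training. The sketch below encodes those triggers for illustration only; the order directs the Department of Commerce to update the thresholds, so the constants are a snapshot of the interim values, not a compliance tool.

```python
# Interim reporting thresholds from Section 4.2(b) of EO 14110; Commerce
# is directed to revise these over time, so treat them as a snapshot.
TRAINING_OPS = 1e26          # total integer/float ops for a training run
BIO_TRAINING_OPS = 1e23      # lower bar for primarily biological sequence data
CLUSTER_OPS_PER_SEC = 1e20   # theoretical max compute of a co-located cluster
CLUSTER_NETWORK_GBPS = 100   # data center networking threshold

def must_report_model(training_ops, primarily_bio_data=False):
    """Reporting trigger for a dual-use foundation model training run."""
    threshold = BIO_TRAINING_OPS if primarily_bio_data else TRAINING_OPS
    return training_ops > threshold

def must_report_cluster(peak_ops_per_sec, network_gbps):
    """Reporting trigger for a large-scale computing cluster."""
    return peak_ops_per_sec > CLUSTER_OPS_PER_SEC and network_gbps > CLUSTER_NETWORK_GBPS

print(must_report_model(3e26))                      # True: frontier-scale run
print(must_report_cluster(2e20, network_gbps=400))  # True: large training cluster
```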
Finally, states have their own regulatory oversight through their attorneys general as well as state laws. Notable on this front is the California Privacy Protection Agency’s (CPPA) recent release of draft automated decision-making technology (ADMT) regulations.31 CPPA, “Draft Automated Decisionmaking Technology Regulations,” December 2023, https://cppa.ca.gov/meetings/materials/20231208_item2_draft.pdf. While not final, the regulations are likely to take effect sometime in 2024 or 2025. The draft proposes requirements for businesses deploying ADMT for any “decision that produces legal or similarly significant effects concerning a consumer.”32 Ibid. A “decision that produces legal or similarly significant effects concerning a consumer” means a decision that results in “access to, or the provision or denial of, financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment or independent contracting opportunities or compensation, healthcare services, or essential goods or services.” Requirements include giving users the right to opt out of ADMT and the right to access information and disclosures about a business’s use of ADMT.33 State of California, “A New Landmark for Consumer Control Over Their Personal Information: CPPA Proposes Regulatory Framework for Automated Decision-Making Technology,” November 27, 2023, https://cppa.ca.gov/announcements/2023/20231127.html. It is reasonable to expect that the CPPA will continue to set rules regarding the use of personal data in AI systems, and that some of those rules will drive changes at the national level.
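As a purely hypothetical illustration of the draft opt-out requirement, the sketch below routes a consumer who has opted out of ADMT to human review instead of an automated decision. The function names, fields, and logic are invented for this example and do not come from the draft regulations.

```python
# Hypothetical sketch of honoring an ADMT opt-out: consumers who opt out
# of automated decision-making for significant decisions are routed to
# human review rather than the model. All names and logic are invented.

def automated_decision(features):
    # Placeholder for a production model; approve low-risk applicants.
    return "approved" if features.get("risk_score", 1.0) < 0.5 else "denied"

def decide(consumer_id, opted_out, features):
    """Honor an ADMT opt-out before any automated decision runs."""
    if consumer_id in opted_out:
        return "queued_for_human_review"
    return automated_decision(features)

opt_outs = {"consumer-123"}
print(decide("consumer-123", opt_outs, {"risk_score": 0.2}))  # queued_for_human_review
print(decide("consumer-456", opt_outs, {"risk_score": 0.2}))  # approved
```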