In December 2023, the Office of Management and Budget outlined a request for public comment to inform its memorandum to the heads of executive departments and agencies on guidelines for AI implementation. In its submission, the AI Now Institute foregrounded the critical need for government AI procurement processes to serve as a key lever of intervention to ensure that AI systems are accountable to the contexts and communities they are meant to serve, especially when these systems mediate or replace public functions.

Our full comment can be read here.

Here are highlights from our submission:


While ensuring that the US government is at the cutting edge is a laudable goal, blanket adoption of AI does not offer a shortcut to innovation. A responsible approach would require agencies to critically evaluate whether AI is necessary, appropriate, and fit for a particular task; whether its use opens up the possibility of other detrimental effects; and what measures should be implemented to ensure that, when AI is used, it is deployed in ways that mitigate potential harms.1

We have two broad observations that apply across the Guidance, alongside more specific comments below.

First, elements of the Guidance Memo, as currently structured, could incentivize reckless, wasteful, and misguided adoption of AI, as well as an overreliance on waivers to escape obligations. To mitigate this, there needs to be more stringent evaluation of the value proposition of using AI in a particular context and its impact on fair competition, along with procedural safeguards to prevent routine waivers from risk assessment. In many instances, litigation following the reckless adoption of AI has ended up costing agencies and taxpayers more than the ‘efficiencies’ of these technologies saved them.2

We would underscore first and foremost that the benefits of some types of AI use remain untested,3 and that in some cases blanket AI adoption mandates can contribute to both harm and waste. In its implementation of the Executive Order, OMB should recognize that the presumption that AI adoption will in all cases enhance operations, improve efficiency, or inherently spur innovation has historically proven flawed. In many instances, wholesale mandates for the adoption of AI technologies do the opposite: they increase waste, exacerbate social harms, and simply fail to work as intended.

Second, it is critical that OMB’s guidance be crafted to ensure that the federal government, a significant buyer in this technology market, structures its procurement of artificial intelligence technology to avoid deepening the significant concentration that already exists within the sector.4 Promoting fair competition is a priority the Biden administration has set in both its Executive Orders on AI and on Competition. To align with this priority, the OMB guidance can be more specific in its competition provisions: we are particularly concerned that, absent sufficient measures to ensure otherwise, the mandate for AI adoption across government risks deepening lock-in to dominant firms’ technological ecosystems.

As we describe with more specificity below, this includes: 

  1. Avoiding infrastructural dependencies to preserve competition and to avoid creating single points of failure through which risk can spread across government.5 Particular measures to prioritize include ensuring that egress fees for data transfers are zero or at cost, and that interoperability mandates guarantee that third-party services connect seamlessly and under the same conditions as dominant firms’ own products.
  2. Minimizing firms’ access to data by implementing strong data security mandates and by evaluating any anti-competitive advantages firms may already hold in certain sectors (such as healthcare and finance), as well as advantages they may gain through access to agency data.
  3. Applying competition protections to public investment measures, particularly the pilot of the National AI Research Resource, to ensure that public investments are deliberately structured so their benefits serve the broader public rather than accruing directly to already dominant firms.6
  4. Evaluating levels of dependency on particular firms across government agencies, enabling clear insight into the level of concentration in the adoption of technology services.


  1. Rashida Richardson, “Best Practices for Government Procurement of Data-Driven Technologies,” SSRN, June 1, 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3855637 ↩︎
  2. See, for example, “State of Michigan Announces Settlement of Civil Rights Class Action Alleging False Accusations of Unemployment Fraud,” Michigan Department of Attorney General, October 20, 2022, https://www.michigan.gov/ag/news/press-releases/2022/10/20/som-settlement-of-civil-rights-class-action-alleging-false-accusations-of-unemployment-fraud. The AI Now Institute has held several convenings examining litigation that challenges government use of artificial intelligence systems; see Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems, Sept. 2018, https://ainowinstitute.org/publication/litigating-algorithms-3, and Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems, Sept. 2019, https://ainowinstitute.org/publication/litigating-algorithms-2019-u-s-report-2 ↩︎
  3. See, for example, the use of a flawed emotion recognition system by Customs and Border Protection: a multimillion-dollar contract for an AI system that relied on methods the scientific community dismisses as pseudoscientific. Joseph Cox, “The A.I. Surveillance Tool DHS Uses to Detect ‘Sentiment and Emotion’,” 404 Media, Aug. 24, 2023, https://www.404media.co/ai-surveillance-tool-dhs-cbp-sentiment-emotion-fivecast/ ↩︎
  4. Nihal Krishan, “Federal gov spending on AI hit $3.3b in fiscal 2022: study,” FedScoop, https://fedscoop.com/us-spending-on-ai-hit-3-3b-in-fiscal-2022/ ↩︎
  5. See, for example, Gary Gensler’s recent remarks on the systemic risk introduced by reliance on a limited number of large-scale AI models: Stefania Palma and Patrick Jenkins, “Gary Gensler urges regulators to tame AI risks to financial stability,” Financial Times, Oct. 15, 2023, https://www.ft.com/content/8227636f-e819-443a-aeba-c8237f0ec1ac. The Treasury Department has outlined similar concerns in its cloud report, https://home.treasury.gov/news/press-releases/jy1252, and the Department of Health and Human Services compiles threats to the security of healthcare systems arising from their use of cloud computing, including tracking the effects of cloud outages on health systems: https://www.hhs.gov/sites/default/files/threats-in-healthcare-cloud-computing.pdf ↩︎
  6. AI Now Institute and Data & Society Institute, “Democratize AI? How the National AI Research Resource Falls Short”, Oct. 5, 2021, https://ainowinstitute.org/publication/democratize-ai-how-the-proposed-national-ai-research-resource-falls-short. ↩︎