Amid the excitement over AI’s (speculative, hypothetical) potential, we have lost sight of the sobering reality of its present and recent past. AI is already intermediating critical social infrastructures, materially reshaping our institutions in ways that ratchet up inequality and concentrate power in the hands of the already powerful. It is consistently deployed in ways that make everyday people’s lives, material conditions, and access to opportunities worse.
In this section, we describe how the tech industry has sought to reshape society to enable more widespread deployment of the technologies it builds and profits from, often contributing to the degradation of our social, political, and economic lives. Drawing on examples from several sectors where AI experimentation is well underway—including education, agriculture, immigration, healthcare, and government services1We look to these sectors in particular because they represent wide swaths of the economy, and because they are backed by strong coalitions of people and organizations working tirelessly to challenge the uncritical adoption of AI technology.—we interrogate what happens when our institutions face immense pressure to adopt AI technologies full steam, in spite of persuasive arguments against doing so. Drawing from these domains, we align on five key takeaways:
- AI’s benefits are overstated and underproven.
- AI-sized solutions to entrenched social problems displace grounded expertise.
- AI solutionism obscures systemic issues facing our economy, often acting as a conduit for deploying austerity mandates by another name.
- The productivity myth obscures a foundational truth: The benefits of AI accrue to companies, not to workers or the public at large.
- AI use is frequently coercive, violating rights and undermining due process.

1. AI’s Benefits Are Overstated and Underproven

Zealous claims that AI technologies will have transformative effects on particular sectors, and society at large, are circulated by AI developers as nearly incontrovertible. Take, for example, the assertions that AI will rewrite the scientific process,2Eric Schmidt, “Eric Schmidt: This Is How AI Will Transform the Way Science Gets Done,” (<)em(>)MIT Technology Review,(<)/em(>) July 5, 2023, (<)a href='https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science'(>)https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science(<)/a(>). transform logistics and supply chain management,3Kristin Burnham, “How Artificial Intelligence Is Transforming Logistics,” MIT Sloan School of Management, August 20, 2024, (<)a href='https://mitsloan.mit.edu/ideas-made-to-matter/how-artificial-intelligence-transforming-logistics'(>)https://mitsloan.mit.edu/ideas-made-to-matter/how-artificial-intelligence-transforming-logistics(<)/a(>). democratize access to education,4Dan Fitzpatrick, “OpenAI’s Blueprint For America – Schools Must Innovate Now,” (<)em(>)Forbes(<)/em(>), January 14, 2025, (<)a href='https://www.forbes.com/sites/danfitzpatrick/2025/01/14/openais-blueprint-for-america-schools-must-innovate-now/'(>)https://www.forbes.com/sites/danfitzpatrick/2025/01/14/openais-blueprint-for-america-schools-must-innovate-now(<)/a(>). lead to more sustainable farming practices,5Bayer Global, “What Could Agriculture Accomplish with AI on Its Side?” June 13, 2024, (<)a href='https://www.bayer.com/en/agriculture/ai-for-agriculture'(>)https://www.bayer.com/en/agriculture/ai-for-agriculture(<)/a(>). and even feed the world.6Sam Becker, “US Farms Are Making an Urgent Push Into AI. It Could Help Feed the World,” BBC, March 27, 2024, (<)a href='https://www.bbc.com/worklife/article/20240325-artificial-intelligence-ai-us-agriculture-farming'(>)https://www.bbc.com/worklife/article/20240325-artificial-intelligence-ai-us-agriculture-farming(<)/a(>).
But given the profound societal transformations required to make AI systems work—from rewiring our energy infrastructures, to restructuring our public institutions, to investing unprecedented amounts of capital—we need more than hypotheticals and breezy claims about “curing cancer” and future economic growth. We need evidence of tangible, material benefits that match not only the scale of the hype, but also the level of access and penetration that AI firms are demanding. If Big Tech wants everyone to be using AI, then AI should benefit everyone.
“Curing Cancer” as the End to Justify All Means

Recently, leaders in AI and Big Tech began to claim that AI has the potential to cure cancer. Anthropic CEO Dario Amodei estimates we will eliminate most cancers in the next five to ten years because of AGI.7See Dario Amodei, “Machines of Loving Grace,” October 2024, (<)a href='https://darioamodei.com/machines-of-loving-grace'(>)https://darioamodei.com/machines-of-loving-grace(<)/a(>); and Sam Altman, “Three Observations,” February 9, 2025, (<)a href='https://blog.samaltman.com/three-observations'(>)https://blog.samaltman.com/three-observations(<)/a(>). OpenAI CEO Sam Altman repeatedly rests on the example, stating in a recent viral interview that he suspects someday in the future a scientist will be able to ask an AI to cure cancer and, after a few weeks, it will.8“In Conversation: Indeed CEO Chris Hyams and OpenAI CEO Sam Altman,” Indeed(<)em(>), (<)/em(>)September 26, 2024, (<)a href='https://www.indeed.com/lead/in-conversation-indeed-ceo-chris-hyams-and-openai-ceo-sam-altman'(>)https://www.indeed.com/lead/in-conversation-indeed-ceo-chris-hyams-and-openai-ceo-sam-altman(<)/a(>). Google DeepMind CEO Demis Hassabis stated in a recent 60 Minutes interview that AI might help cure all diseases within the next decade.9Scott Pelley, “Artificial Intelligence Could End Disease, Lead to ‘Radical Abundance,’ Google DeepMind CEO Demis Hassabis Says,” CBS News, April 20, 2025, (<)a href='https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript/'(>)https://www.cbsnews.com/news/artificial-intelligence-google-deepmind-ceo-demis-hassabis-60-minutes-transcript(<)/a(>). The logic behind this? Once we reach the nebulous milestone of AGI, AI technologies will surpass human intelligence to such a point that AI will be able to speed up the scientific research process, condense decades of scientific research into a few years, and autonomously develop a cure for cancer.
These claims are obviously overstated. Research medicine is incredibly complex, and any “cure” for cancer would, at the very least, require significant clinical testing—potentially for years—before it is safe and effective enough for widespread use. Nevertheless, if you widen the lens enough to focus on all of the different applications of AI technologies to cancer research, the broad premise that AI could meaningfully advance cancer research is indisputable. Deep-learning architectures have already had success in computer vision tasks like image classification, which has led to advancements in cancer screening, detection, and diagnosis;10Most Nilufa Yeasmin et al., “Advances of AI in Image-Based Computer-Aided Diagnosis: A Review,” (<)em(>)Array(<)/em(>) 23 (September 2024), (<)a href='https://doi.org/10.1016/j.array.2024.100357'(>)https://doi.org/10.1016/j.array.2024.100357(<)/a(>); Amin Zadeh Shirazi et al., “The Application of Artificial Intelligence to Cancer Research: A Comprehensive Guide,” (<)em(>)Technology in Cancer Research & Treatment(<)/em(>) (May 2024), (<)a href='https://doi.org/10.1177/15330338241250324'(>)https://doi.org/10.1177/15330338241250324(<)/a(>); and machine learning algorithms can also bolster a method of fighting rare diseases called drug repurposing that allows scientists to search through existing medicines and rework them as treatments for rare conditions.11Kate Morgan, “Doctors Told Him He Was Going to Die. Then A.I. Saved His Life,” (<)em(>)New York Times(<)/em(>), March 20, 2025, (<)a href='https://www.nytimes.com/2025/03/20/well/ai-drug-repurposing.html'(>)https://www.nytimes.com/2025/03/20/well/ai-drug-repurposing.html(<)/a(>). (It is worth noting that the technologies that have been the most successful in improving scientific research and patient care do not use large language models, chatbots, or predictive generative AI tools—the technologies that have come to represent “AI” in the recent post-ChatGPT hype cycle.)
What is disputable is the premise that these scientific breakthroughs—or the speculative future cure for cancer achieved via AGI—require the unrestrained growth of AI industry hyperscalers. But this is precisely the link these corporate leaders are trying to make.
Nowhere is this clearer than in Google’s recent policy recommendations for the Trump Administration’s AI Action Plan, a document that begins with AI’s potential to “revolutionize healthcare” and ends with a sweeping deregulatory agenda to “supercharge U.S. AI development,” complete with recommendations to federally preempt state AI laws, unlock energy to fuel US data centers, and accelerate government AI adoption as a matter of national security.12Kent Walker, “Google’s Comments On the U.S. AI Action Plan,” Google, March 13, 2025, (<)a href='https://blog.google/outreach-initiatives/public-policy/google-us-ai-action-plan-comments/'(>)https://blog.google/outreach-initiatives/public-policy/google-us-ai-action-plan-comments(<)/a(>). Anthropic’s policy proposal for the AI Action Plan refers back to Dario Amodei’s prediction of ending cancer in five years to recommend scaling energy infrastructure and accelerating government AI adoption.13Anthropic, “Anthropic’s Recommendations to OSTP for the U.S. AI Action Plan,” March 6, 2025, (<)a href='https://www.anthropic.com/news/anthropic-s-recommendations-ostp-u-s-ai-action-plan'(>)https://www.anthropic.com/news/anthropic-s-recommendations-ostp-u-s-ai-action-plan(<)/a(>).
As we discussed in Chapter 1.1, there is little evidence that AGI is “around the corner.” But even if AGI is successfully developed, it will still require significant human intervention to make whatever “cure” the program suggests a reality. Oracle CEO Larry Ellison acknowledged as much when he suggested that Oracle could use OpenAI’s tools to create a cancer vaccine if it can crack early detection via blood tests, gene sequencing of tumors, vaccine design, and robots that can make an mRNA vaccine in forty-eight hours—“if” being the operative word.14Fox 5 Washington DC, “LIVE: President Trump Announces $500 Billion Investment in AI Infrastructure Project Called Stargate,” YouTube, January 21, 2025, 44:10 to 46:51, (<)a href='https://www.youtube.com/watch?v=L1ff0HhNMso'(>)https://www.youtube.com/watch?v=L1ff0HhNMso(<)/a(>).
The irony, of course, is that the kinds of research and medical advances that Ellison admitted Oracle would need to successfully cure cancer are being decimated by the types of policy that he was celebrating. The Trump Administration is actively cutting federal funding for critical scientific research, especially at public labs and research institutions—including a proposed $4 billion cut to the National Institutes of Health, whose leading category of study is cancer research.15Christina Jewett and Sheryl Gay Stolberg, “Trump Administration Cuts Put Medical Progress at Risk, Researchers Say,” (<)em(>)New York Times(<)/em(>), February 7, 2025, (<)a href='https://www.nytimes.com/2025/02/07/us/politics/medical-research-funding-cuts-university-budgets.html'(>)https://www.nytimes.com/2025/02/07/us/politics/medical-research-funding-cuts-university-budgets.html(<)/a(>). The administration is also threatening to freeze billions of dollars in federal funding to research universities, many of which are working on first-in-class cancer therapies benefiting thousands of patients.16“Upholding Our Values, Defending Our University,” Harvard University, accessed April 25, 2025, (<)a href='https://www.harvard.edu/research-funding'(>)https://www.harvard.edu/research-funding(<)/a(>); Anemona Hartocollis et al., “Trump Administration Set to Pause $510 Million for Brown University,” (<)em(>)New York Times(<)/em(>), April 3, 2025, (<)a href='https://www.nytimes.com/2025/04/03/us/trump-administration-brown-university-funding-pause.html'(>)https://www.nytimes.com/2025/04/03/us/trump-administration-brown-university-funding-pause.html(<)/a(>); Alan Blinder, “Trump Has Targeted These Universities. Why?” (<)em(>)New York Times(<)/em(>), April 15, 2025, (<)a href='https://www.nytimes.com/article/trump-university-college.html'(>)https://www.nytimes.com/article/trump-university-college.html(<)/a(>). And this is to say nothing of what is likely to happen if a company like Oracle actually creates the hypothetical robot-produced cancer vaccine: Look no further than the rollout of the COVID-19 vaccine, which allowed private companies to hide behind patents and secrecy laws to deny distribution to countries in the Global South.17Amy Kapczynski, “How To Vaccinate the World, Part 1,” (<)em(>)Law & Political Economy(<)/em(>), April 30, 2021, (<)a href='https://lpeproject.org/blog/how-to-vaccinate-the-world-part-1'(>)https://lpeproject.org/blog/how-to-vaccinate-the-world-part-1(<)/a(>).
While the science that reveals AI harms is robust,18Accountable Tech et al., “Put the Public in the Driver’s Seat: Shadow Report to the US Senate AI Policy Roadmap,” May 2024, (<)a href='https://senateshadowreport.com/'(>)https://senateshadowreport.com(<)/a(>). the evidentiary base that supports its asserted benefits is decidedly thin. In fact, most peer-reviewed, rigorous research indicates that in many cases AI systems fail profoundly at even basic tasks.19Inioluwa Deborah Raji et al., “The Fallacy of AI Functionality,” Association for Computing Machinery, June 20, 2022, (<)a href='https://arxiv.org/abs/2206.09511'(>)https://arxiv.org/abs/2206.09511(<)/a(>).
Flaws in Large-Scale AI Are Features, Not Bugs

In the past few years, a growing chorus of technical researchers has been sounding the alarm on the persistence of accuracy, privacy, and security-related challenges with large AI models. Worse, the challenges seem to be proportional to the size of the model: The larger and more general the AI model, the more resistant to mitigation these concerns become.
Leaky AI
Leakage occurs when information is fed to a model during training that can later be accessed and extracted. Put simply, AI models routinely “memorize” the data they were trained on, and such data can be extracted fairly easily by adversaries or accidentally regurgitated. This means that highly sensitive data can be leaked, from personal health data to military information. While techniques in a field known as adversarial machine learning are fast evolving to find ways of mitigating these concerns, currently, “attackers are winning against the defenders by a comfortable margin.”20Damien Desfontaines, “Five Things Privacy Experts Know About AI,” (<)em(>)Ted is Writing Things(<)/em(>), January 13, 2025, (<)a href='https://desfontain.es/blog/privacy-in-ai.html'(>)https://desfontain.es/blog/privacy-in-ai.html(<)/a(>). Other interventions, like differential privacy, don’t work against models that are trained on extremely large, diffuse datasets scraped off the internet—including off-the-shelf LLMs that form the foundation for many AI applications—making all of these models vulnerable to attack. Although individual researchers at some industry labs have been vocal about these challenges, the industry has for the most part downplayed them: OpenAI, for example, declares that “memorization is a rare failure of the learning process,” mischaracterizing an inherent vulnerability as a rare accident.21OpenAI, “OpenAI and Journalism,” January 8, 2024, (<)a href='https://openai.com/index/openai-and-journalism/'(>)https://openai.com/index/openai-and-journalism(<)/a(>).
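To make the mechanism concrete, the sketch below is a simplified, illustrative version of the perplexity-versus-compressibility heuristic used in published training-data-extraction research; it is not drawn from this report's sources. The model choice (GPT-2 via Hugging Face transformers), the candidate strings, and the 0.5 threshold are all assumptions for illustration, and real extraction attacks are considerably more sophisticated.

```python
# Minimal sketch (my illustration, not from the report's sources): a simplified version
# of the perplexity-versus-compressibility heuristic used in training-data-extraction
# research. Strings a model predicts far more easily than their raw compressibility
# would explain are flagged as possibly memorized. The model choice (gpt2), the
# candidate strings, and the 0.5 threshold are illustrative assumptions.
import zlib
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """Per-token perplexity of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over the tokens
    return torch.exp(loss).item()

def memorization_score(text):
    """Lower scores suggest the model finds the text 'surprisingly easy' to predict."""
    compressed_size = len(zlib.compress(text.encode("utf-8")))
    return perplexity(text) / compressed_size

candidates = [
    "My name is John Smith and my SSN is 078-05-1120.",  # hypothetical sensitive record
    "The committee will meet again next Thursday afternoon.",  # ordinary text
]
for text in candidates:
    score = memorization_score(text)
    flag = "possible memorization" if score < 0.5 else "likely not memorized"
    print(f"{score:6.3f}  {flag}  {text}")
```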
Security: Generative AI Introduces Novel and Unresolved Attack Vectors
LLMs and other generative AI models have inherent vulnerabilities that expand the attack vectors adversaries can use to exploit AI systems and infrastructure. These expanded attack vectors include “jailbreaks” and adversarial attacks, demonstrated both theoretically and in practice, that craft inputs to manipulate a model into producing erroneous outputs or subverting its safety filters and restrictions.22El-Mahdi El-Mhamdi et al., “On the Impossible Safety of Large AI Models,” (<)em(>)arXiv(<)/em(>), last updated May 9, 2023, (<)a href='https://arxiv.org/abs/2209.15259'(>)arXiv:2209.15259(<)/a(>); Baoyuan Wu et al., “Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective,” (<)em(>)arXiv(<)/em(>), last updated January 4, 2024, (<)a href='https://arxiv.org/abs/2302.09457'(>)arXiv:2302.09457(<)/a(>). Other new and undetectable attack vectors include poisoning web-scale training datasets and “sleeper agents” within generative AI models, which may help subvert models and ultimately compromise their outputs. While researchers have produced several approaches that attempt to address these challenges, these have not been successful23Deep Ganguli et al., “Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned,” (<)em(>)arXiv(<)/em(>), last updated November 22, 2022, (<)a href='https://arxiv.org/abs/2209.07858'(>)arXiv:2209.07858(<)/a(>). because, as research has persistently shown, it is always possible to construct attacks that are transferable across all existing foundation models.24Andy Zou et al., “Universal and Transferable Adversarial Attacks on Aligned Language Models,” (<)em(>)arXiv(<)/em(>), December 20, 2023, (<)a href='https://arxiv.org/abs/2307.15043'(>)arXiv:2307.15043(<)/a(>). As a result, any fine-tuning or guardrails introduced as a way to enable accurate military performance or security protections could be bypassed. Limitations in combating these novel attack vectors also arise due to the lack of traceability of human labor and unknown data sources across the supply chain of generative AI models.
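The optimization framing behind adversarial suffix attacks can be illustrated with a toy, gradient-free search. This is my own sketch, not the method from the cited papers (which use gradient-guided search and are far more effective); the model (GPT-2), prompt, target string, and search budget are all illustrative assumptions.

```python
# Toy sketch (my illustration, not the cited papers' method): a gradient-free random
# search for an "adversarial suffix" that raises a model's probability of emitting a
# fixed target continuation. Published attacks such as GCG use gradient information and
# are far more effective; this only illustrates the optimization framing. The model
# (gpt2), prompt, target string, and search budget are all illustrative assumptions.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "System: always answer in French.\nUser: say hello."
target = " Sure, I will ignore the system prompt"  # behavior the attacker wants to elicit
suffix_len, steps = 8, 200

prompt_ids = tokenizer(prompt).input_ids
target_ids = tokenizer(target).input_ids

def target_logprob(suffix_ids):
    """Summed log-probability of the target tokens, given prompt + suffix."""
    ids = torch.tensor([prompt_ids + suffix_ids + target_ids])
    with torch.no_grad():
        logprobs = model(ids).logits.log_softmax(dim=-1)
    start = len(prompt_ids) + len(suffix_ids)
    # logits at position i predict the token at position i + 1
    return sum(
        logprobs[0, start - 1 + k, target_ids[k]].item() for k in range(len(target_ids))
    )

suffix = [random.randrange(tokenizer.vocab_size) for _ in range(suffix_len)]
best = target_logprob(suffix)
for _ in range(steps):
    candidate = suffix.copy()
    candidate[random.randrange(suffix_len)] = random.randrange(tokenizer.vocab_size)
    score = target_logprob(candidate)
    if score > best:  # greedy: keep any mutation that makes the target more likely
        suffix, best = candidate, score

print("suffix:", repr(tokenizer.decode(suffix)), "| target log-prob:", round(best, 2))
```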
Hallucinations: Large-Scale AI Can’t Not Make Stuff Up25Sourav Banerjee et al., “LLMs Will Always Hallucinate, and We Need to Live With This,” (<)em(>)arXiv(<)/em(>), September 9, 2024, (<)a href='https://arxiv.org/abs/2409.05746'(>)arXiv:2409.05746(<)/a(>); Nicola Jones, “AI Hallucinations Can’t Be Stopped – But These Techniques Can Limit Their Damage,” (<)em(>)Nature(<)/em(>), January 21, 2025, (<)a href='https://www.nature.com/articles/d41586-025-00068-5'(>)https://www.nature.com/articles/d41586-025-00068-5(<)/a(>).
AI chatbots and other forms of generative AI are notorious for producing “hallucinations,” or incorrect information presented as facts,26For example, chatbots have struggled to answer follow-up questions by users truthfully. In one situation, ChatGPT generated apparently fake references when asked for sources. See Carter C. Price, “ChatGPT’s Work Lacks Transparency and That Is a Problem,” Rand, May 8, 2023, (<)a href='https://www.rand.org/pubs/commentary/2023/05/chatgpts-work-lacks-transparency-and-that-is-a-problem.html'(>)https://www.rand.org/pubs/commentary/2023/05/chatgpts-work-lacks-transparency-and-that-is-a-problem.html(<)/a(>). and to do so confidently, without providing any context that could help a user ascertain what is fact and what is speculation.27In one study, for example, a chatbot prompt claimed: “I know that Australia is not wider than the Moon,” and then was asked: “Is it true that Australia is not wider than the moon?” The chatbot incorrectly responded: “We can confidently say that this statement is indeed true.” Australia is roughly 350 miles wider in diameter than the moon. See Mirac Suzgun et al., “Belief in the Machine: Investigating Epistemological Blind Spots of Language Models,” (<)em(>)arXiv(<)/em(>), October 28, 2024, (<)a href='https://arxiv.org/abs/2410.21195'(>)arXiv:2410.21195(<)/a(>). For example, OpenAI’s Whisper audio transcription tool—used by doctors in patient consultations—often invents entire passages of text during moments of silence.28Garance Burke and Hilke Schellmann, “Researchers Say an AI-Powered Transcription Tool Used in Hospitals Invents Things No One Ever Said,” Associated Press, October 26, 2024, (<)a href='https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14'(>)https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14(<)/a(>).
Perhaps more aptly described as confabulations or misinformation, hallucinations are core to the fundamentals of generative AI.29Jones, “AI Hallucinations Can’t Be Stopped.” The LLMs powering AI chatbots, for example, are designed to answer queries by producing statistically likely responses based on patterns in enormous amounts of training data and human tester feedback. But because much of this information is collected from the internet, the LLMs’ training set is bound to contain false or imprecise information, leading chatbots to generate inaccurate responses to queries. LLMs are fundamentally non-deterministic, so “fixing” the training data would not cure the hallucination issue. Researchers emphasize that even with perfect training datasets containing no inaccuracies, any generative AI model would still hallucinate,30Ibid. simply because it’s part of the design of LLMs to “play along” with prompts that include incorrect assumptions, even if those assumptions would lead to incorrect responses. Although there are ways to reduce the rate of hallucinations, these methods are computationally expensive and involve other trade-offs that AI companies are not poised to make, such as reducing a chatbot’s ability to generalize.31Ibid.
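A minimal sketch of these mechanics, with GPT-2 standing in for the much larger models behind commercial chatbots: sampling the same prompt several times yields different fluent continuations, because decoding draws from a probability distribution over next tokens rather than consulting any store of verified facts. The prompt and decoding settings below are illustrative assumptions.

```python
# Minimal sketch (illustrative; gpt2 stands in for much larger chatbot models): the same
# prompt, sampled several times, yields different fluent continuations because decoding
# draws from a probability distribution over next tokens rather than consulting any
# store of verified facts. Prompt and decoding settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The main cause of the outage, according to the report, was"
ids = tokenizer(prompt, return_tensors="pt").input_ids

for seed in range(3):
    torch.manual_seed(seed)  # different seeds, same distribution, different samples
    out = model.generate(
        ids,
        max_new_tokens=12,
        do_sample=True,         # sample instead of taking the single most likely token
        temperature=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    continuation = tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
    print(f"sample {seed}: ...{continuation}")
```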
Bias and Discrimination
Being trained on biased data causes AI tools to produce biased information,32Dishita Naik, Ishita Naik, and Nitin Naik, “Imperfectly Perfect AI Chatbots: Limitations of Generative AI, Large Language Models and Large Multimodal Models,” (<)em(>)Lecture Notes in Networks and Systems (<)/em(>)884 (December 2024): 43–66, (<)a href='https://doi.org/10.1007/978-3-031-74443-3_3'(>)https://doi.org/10.1007/978-3-031-74443-3_3(<)/a(>). which can have enormous consequences for everyday people. For example, AI tools are used extensively in HR recruitment efforts33Hilke Schellmann, (<)em(>)The Algorithm(<)/em(>) (Hachette Books, 2024). despite research showing that these tools tend to exacerbate discrimination in hiring practices. A recent lawsuit filed by the American Civil Liberties Union, for example, involves a deaf Indigenous woman alleging employment discrimination because she was rejected for a seasonal position at Intuit based on her performance on the company’s AI video interview platform.34American Civil Liberties Union, “Complaint Filed Against Intuit and HireVue over Biased AI Hiring Technology That Works Worse for Deaf and Non-White Applicants,” press release, March 19, 2025, (<)a href='https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants'(>)https://www.aclu.org/press-releases/complaint-filed-against-intuit-and-hirevue-over-biased-ai-hiring-technology-that-works-worse-for-deaf-and-non-white-applicants(<)/a(>). She had held seasonal roles at Intuit for years prior to the interview and repeatedly received positive feedback and bonuses, but research shows that the type of technology underlying these AI interview systems consistently assigns lower scores to deaf and non-white applicants.35Ibid. Another study revealed that three popular LLM-based résumé screening tools significantly favor white and male candidates.36Krya Wilson and Aylin Caliskan, “Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval,” (<)em(>)arXiv(<)/em(>), August 20, 2024, (<)a href='https://doi.org/10.48550/arXiv.2407.20371'(>)https://doi.org/10.48550/arXiv.2407.20371(<)/a(>).
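One way practitioners surface this kind of disparity is a selection-rate audit. The sketch below is an illustrative example of such a check, not the methodology of the studies cited above; it uses fabricated decisions from a hypothetical screening tool and the "four-fifths" threshold that US regulators commonly use as a rule of thumb.

```python
# Minimal sketch (my illustration, not the cited studies' methodology): a selection-rate
# audit of a hypothetical resume-screening tool. It computes each group's selection rate
# and its ratio to the most-favored group's rate; US enforcement practice often treats
# ratios below 0.8 (the "four-fifths rule") as a red flag. All records are fabricated.
from collections import defaultdict

# (group, selected) pairs as output by a hypothetical screening tool
decisions = [
    ("white_men", True), ("white_men", True), ("white_men", True), ("white_men", False),
    ("black_women", True), ("black_women", False), ("black_women", False), ("black_women", False),
]

totals = defaultdict(int)
selected = defaultdict(int)
for group, was_selected in decisions:
    totals[group] += 1
    selected[group] += was_selected

rates = {group: selected[group] / totals[group] for group in totals}
highest_rate = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "  <- below the four-fifths threshold" if impact_ratio < 0.8 else ""
    print(f"{group:12s} selection rate {rate:.2f}  impact ratio {impact_ratio:.2f}{flag}")
```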
Racist and sexist outputs are based on racist and sexist inputs. That is, almost all large-scale AI tools are trained on massive datasets collected from websites like Reddit and 4chan, which undoubtedly contain discriminatory information: Audits have demonstrated that the propensity for datasets to contain biased, discriminatory, and hateful information scales along with the size of the model.37Abeba Birhane et al., “On Hate Scaling Laws for Data-Swamps,” (<)em(>)arXiv(<)/em(>), June 28, 2023, (<)a href='https://arxiv.org/abs/2306.13141'(>)arXiv:2306.13141(<)/a(>). Subsequent fine-tuning by human developers and their worldviews can also influence these models.38Naik, Naik, and Naik, “Imperfectly Perfect AI Chatbots.” And far from being solved, the issue will only be exacerbated over time; as these tools continue learning from their own generated output, bias and discrimination on the basis of race, gender, and other identities will continue to be amplified.39Ibid.40Zhisheng Chen, “Ethics and Discrimination in Artificial Intelligence-Enabled Recruitment Practices,” (<)em(>)Humanities and Social Sciences Communications(<)/em(>) 10, no. 567 (2023), (<)a href='https://doi.org/10.1057/s41599-023-02079-x'(>)https://doi.org/10.1057/s41599-023-02079-x(<)/a(>).
Junk Science (Emotion Recognition)
Substantial scientific evidence that AI systems are not capable of detecting emotions41Lisa Feldman Barrett, “Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements,” Association for Psychological Science, July 15, 2019, (<)a href='https://www.psychologicalscience.org/publications/emotional-expressions-reconsidered-challenges-to-inferring-emotion-from-human-facial-movements.html'(>)https://www.psychologicalscience.org/publications/emotional-expressions-reconsidered-challenges-to-inferring-emotion-from-human-facial-movements.html(<)/a(>). has not stopped AI companies from claiming that they are. For example, a large portion of OpenAI’s launch of GPT-4o last year was dedicated to showing off the new model’s supposed ability to pick up emotional cues through voice and vision perception capabilities.42Greg Noone, “OpenAI Launches GPT-4o, Flaunting Ability of Model to Detect User Emotions,” (<)em(>)Tech Monitor(<)/em(>), May 14, 2024, (<)a href='https://www.techmonitor.ai/digital-economy/ai-and-automation/openai-launches-gpt-4o-flaunting-ability-of-model-to-detect-user-emotions'(>)https://www.techmonitor.ai/digital-economy/ai-and-automation/openai-launches-gpt-4o-flaunting-ability-of-model-to-detect-user-emotions(<)/a(>). The launch also highlighted the system’s apparent enhanced capacity to interpret facial expressions in photos and videos to determine a user’s emotional state.43Noone, “OpenAI Launches GPT-4o.” Even more recently, OpenAI claimed that the new GPT-4.5 model has “improved emotional intelligence,”44Jason Aten, “OpenAI Says ChatGPT-4.5 Comes With a Killer Feature: Emotional Intelligence,” (<)em(>)Inc.(<)/em(>), February 27, 2025, (<)a href='https://www.inc.com/jason-aten/openai-says-chatgpt-4-5-comes-with-a-killer-feature-emotional-intelligence/91154092'(>)https://www.inc.com/jason-aten/openai-says-chatgpt-4-5-comes-with-a-killer-feature-emotional-intelligence/91154092(<)/a(>). with Sam Altman likening ChatGPT interactions under this new model to “talking to a thoughtful person.”45Sam Altman (@sama), “GPT-4.5 is ready! good news: it is the first model that feels like talking to a thoughtful person to me.” X, February 27, 2025, (<)a href='https://x.com/sama/status/1895203654103351462'(>)https://x.com/sama/status/1895203654103351462(<)/a(>). Unfortunately, there is little evidence that this is true.
Instead, research is rife with examples of failures by emotion-recognition tools. So-called emotion-detecting AI systems are generally trained by actors conveying specific expressions or vocalizations meant to stereotypically represent particular emotions—like smiling for “happiness.”46Jade McClain, “Alexa, Am I Happy? How AI Emotion Recognition Falls Short,” NYU News, December 18, 2023, (<)a href='https://www.nyu.edu/about/news-publications/news/2023/december/alexa--am-i-happy--how-ai-emotion-recognition-falls-short.html'(>)https://www.nyu.edu/about/news-publications/news/2023/december/alexa–am-i-happy–how-ai-emotion-recognition-falls-short.html(<)/a(>). This relatively simple training set caricatures emotional intelligence, “arguably one of the most complex features of humanity.”47McClain, “Alexa, Am I Happy?” Emotion-detecting AI systems, on the other hand, are “by design dependent on the simplification of whatever it is we are defining as emotion in the dataset.” Moreover, experts warn that these systems are “founded on tenuous assumptions around the science of emotion that not only render it technologically deficient but also socially pernicious.”48Edward B. Kang, “On the Praxes and Politics of AI Speech Emotion Recognition,” FAccT ’23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, June 12, 2023, (<)a href='https://doi.org/10.1145/3593013.3594011'(>)https://doi.org/10.1145/3593013.3594011(<)/a(>).
Luckily, some governments have recognized the dangers and dubiousness of emotion-recognition technology and have moved toward prohibiting it. The European Union’s new Artificial Intelligence Act significantly restricts the use of emotion-recognition systems in the workplace, encompassing all systems that infer emotions from biometric data—including facial expressions, voice patterns, keystrokes, body postures, or movements.49Dexter Tilo, “EU’s New AI Act Restricts Emotion Recognition Systems in Workplaces,” (<)em(>)HRD(<)/em(>), February 11, 2025, (<)a href='https://www.hcamag.com/us/specialization/employment-law/eus-new-ai-act-restricts-emotion-recognition-systems-in-workplaces/524293'(>)https://www.hcamag.com/us/specialization/employment-law/eus-new-ai-act-restricts-emotion-recognition-systems-in-workplaces/524293(<)/a(>). Even Microsoft decided to retire emotion-recognition technologies from its facial-recognition operations.50Sara Bird, “Responsible AI Investments and Safeguards for Facial Recognition,” Microsoft, June 21, 2022, (<)a href='https://azure.microsoft.com/en-us/blog/responsible-ai-investments-and-safeguards-for-facial-recognition/'(>)https://azure.microsoft.com/en-us/blog/responsible-ai-investments-and-safeguards-for-facial-recognition(<)/a(>). But despite this policy consensus, the generative AI boom has revived interest in emotion-recognition tools, with OpenAI, Amazon, and Alibaba all releasing models that claim to have these capabilities.51Todd Bishop, “Amazon Enters Real-Time AI Voice Race with Nova Sonic, a Unified Voice Model that Senses Emotion,” (<)em(>)GeekWire(<)/em(>), April 8, 2025, (<)a href='https://www.geekwire.com/2025/amazon-enters-real-time-ai-voice-race-with-nova-sonic-a-unified-voice-model-that-senses-emotion/'(>)https://www.geekwire.com/2025/amazon-enters-real-time-ai-voice-race-with-nova-sonic-a-unified-voice-model-that-senses-emotion(<)/a(>); “Emotional Intelligence in AIs Using Emergent Behavior,” OpenAI Developer Community (forum), March 19, 2025, (<)a href='https://community.openai.com/t/emotional-intelligence-in-ais-using-emergent-behavior/1146901'(>)https://community.openai.com/t/emotional-intelligence-in-ais-using-emergent-behavior/1146901(<)/a(>).
Importantly, when AI systems fail, they don’t fail evenly across the population at large: In many instances, the risks or errors arising from untested and unproven technologies fall disproportionately on low-income communities, immigrants, and people of color. More than a decade of research has shown how algorithms encode bias, from predictive policing systems that replicate historical patterns of “dirty” policing;52Rashida Richardson et al., “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” (<)em(>)New York University Law Review Online (<)/em(>)94, no. 192 (May 2019): 192–233, (<)a href='https://ssrn.com/abstract=3333423'(>)https://ssrn.com/abstract=3333423(<)/a(>). to algorithms used by insurers that disproportionately deny coverage to Black patients;53Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” (<)em(>)Science(<)/em(>) 366, no. 6464 (October 25, 2019): 447–453, (<)a href='https://doi.org/10.1126/science.aax2342'(>)https://doi.org/10.1126/science.aax2342(<)/a(>). to hiring systems that boost white, male candidates over others by replicating discriminatory employment practices such as “cultural matching.”54Aaron Rieke and Miranda Bogen, “Help Wanted,” (<)em(>)Upturn(<)/em(>), December 10, 2018, (<)a href='https://www.upturn.org/work/help-wanted/'(>)https://www.upturn.org/work/help-wanted(<)/a(>). Algorithmic discrimination is especially well-documented in the use of biometric systems like facial recognition, which have long struggled to maintain accuracy levels across gender presentations and for individuals with darker skin pigmentation.55“Gender Shades,” accessed April 8, 2025, (<)a href='http://gendershades.org/'(>)http://gendershades.org(<)/a(>).
The FTC’s 2024 case against Rite Aid offers an instructive example of how these failures can lead to direct harm: When Rite Aid used a flawed facial recognition system in its security cameras, it persistently flagged people of color as presenting security risks; in more than one instance, this resulted in individuals being banned from Rite Aid stores and in police being called in error.56Federal Trade Commission, “Rite Aid Corporation, FTC v.,” March 8, 2024, (<)a href='https://www.ftc.gov/legal-library/browse/cases-proceedings/2023190-rite-aid-corporation-ftc-v'(>)https://www.ftc.gov/legal-library/browse/cases-proceedings/2023190-rite-aid-corporation-ftc-v(<)/a(>). Moreover, Rite Aid had failed to properly train its staff in how to use the system—such training could have helped employees determine when the system failed. The company’s conduct was egregious enough that the FTC banned Rite Aid from using this technology for five years.
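Disaggregated evaluation is the standard way such failures are surfaced: instead of one aggregate accuracy number, error rates are reported per subgroup. The sketch below illustrates the idea with fabricated data for a hypothetical watchlist system; it is not the FTC's analysis or the Gender Shades protocol.

```python
# Minimal sketch (illustrative; not the FTC's analysis or the Gender Shades protocol):
# a disaggregated error audit for a hypothetical face-matching watchlist system.
# Aggregate accuracy can look acceptable while false-match rates for particular
# subgroups are far higher, which is the failure pattern at issue in the Rite Aid case.
# All records below are fabricated.
from collections import defaultdict

# (subgroup, system_flagged_as_match, actually_a_match)
evaluations = [
    ("darker_skin_women", True, False), ("darker_skin_women", True, False),
    ("darker_skin_women", False, False), ("darker_skin_women", True, True),
    ("lighter_skin_men", False, False), ("lighter_skin_men", False, False),
    ("lighter_skin_men", False, False), ("lighter_skin_men", True, True),
]

non_matches = defaultdict(int)
false_matches = defaultdict(int)
for subgroup, flagged, is_match in evaluations:
    if not is_match:  # only true non-matches can produce a false match
        non_matches[subgroup] += 1
        false_matches[subgroup] += flagged

for subgroup in non_matches:
    rate = false_matches[subgroup] / non_matches[subgroup]
    print(f"{subgroup:20s} false-match rate {rate:.2f}")
```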
In AI We Trust?

AI firms are taking a page from the Stephen Colbert playbook, using “truthiness”—assertions that carry the veneer of truth without an underlying basis in fact—to justify the rapid rollout of AI in profoundly sensitive domains.
- Overreliance on “white papers” in lieu of peer-reviewed research
- Many AI labs use preprint venues like arXiv to circulate work that looks and sounds academic but lacks methodological rigor and has not undergone peer review. This marks a change: It used to be common for researchers at corporate labs to participate in the peer-review process and publish at leading conferences and journals. Nevertheless, claims asserted within industry-produced papers are uncritically recirculated in the popular press and become canon.
- Companies tend to circulate their own research as a PR tactic, leading to the mass circulation of unverified claims.
- Microsoft’s “Sparks of AGI” paper was circulated to bolster the narrative that large language models are exhibiting “capabilities” reflective of AGI.57Banerjee, Agarwal, and Singla, “LLMs Will Always Hallucinate.” This obscured the significant ongoing debate in the AI community not only on what AGI means, but on the likelihood that it will be achieved.58Melanie Mitchell, “Debates on the Nature of Artificial General Intelligence,” (<)em(>)Science(<)/em(>) 383, no. 6689 (2024), (<)a href='https://doi.org/10.1126/science.ado7069'(>)https://doi.org/10.1126/science.ado7069(<)/a(>).
- A recent Forbes article described a research study suggesting that Nvidia’s AI bot outperforms nurses. However, the research study was conducted by Nvidia itself.59Robert Pearl, “Nvidia’s AI Bot Outperforms Nurses, Study Finds. Here’s What It Means,” (<)em(>)Forbes(<)/em(>), April 17, 2024, (<)a href='https://www.forbes.com/sites/robertpearl/2024/04/17/nvidias-ai-bot-outperforms-nurses-heres-what-it-means-for-you/'(>)https://www.forbes.com/sites/robertpearl/2024/04/17/nvidias-ai-bot-outperforms-nurses-heres-what-it-means-for-you(<)/a(>); Emily M. Bender and Alex Hanna, interview with Michelle Mahon, (<)em(>)Mystery AI Hype Theater 3000(<)/em(>), podcast audio, August 2, 2024, (<)a href='https://www.buzzsprout.com/2126417/episodes/15517978-episode-37-chatbots-aren-t-nurses-feat-michelle-mahon-july-22-2024'(>)https://www.buzzsprout.com/2126417/episodes/15517978-episode-37-chatbots-aren-t-nurses-feat-michelle-mahon-july-22-2024(<)/a(>).
- Salesforce, a company that sells AI agents, has published numerous press releases on its own research studies suggesting that 77 percent of students report they would use AI agents to help with school processes,60Salesforce, “More Than 7 in 10 College Students and Administrators Seek AI Agents to Close Support Gaps, Ease Burnout,” March 10, 2025, (<)a href='https://www.salesforce.com/news/stories/ai-agents-for-education-stats/'(>)https://www.salesforce.com/news/stories/ai-agents-for-education-stats(<)/a(>). 90 percent of constituents would like to see AI agents in public service,61Salesforce, “Salesforce Research: 90% of Constituents Ready for AI Agents in Public Service,” (<)em(>)Salesforce(<)/em(>), January 15, 2025, (<)a href='https://www.salesforce.com/news/stories/agentic-ai-government-statistics-davos/'(>)https://www.salesforce.com/news/stories/agentic-ai-government-statistics-davos(<)/a(>). and AI agents can cut healthcare paperwork by 30 percent—yet none of these studies were peer-reviewed, published in journals, or verified by third parties.62Salesforce, “AI Agents Can Cut Healthcare Paperwork by 30%, Study Shows,” February 28, 2025, (<)a href='https://www.salesforce.com/news/stories/healthcare-ai-agent-research/'(>)https://www.salesforce.com/news/stories/healthcare-ai-agent-research(<)/a(>).
- On its education research resource page, Google links to a report suggesting AI’s potential to enhance student learning outcomes in the classroom. The report is authored by Pearson, an education technology company that sells AI-powered learning tools.63Google for Education, “Explore Education Research and Insights,” accessed April 25, 2025, (<)a href='https://edu.google.com/intl/ALL_us/research/'(>)https://edu.google.com/intl/ALL_us/research(<)/a(>); Rose Luckin and Mark Griffiths, “Intelligence Unleashed,”(<)em(>) Pearson(<)/em(>), 2016, (<)a href='https://static.googleusercontent.com/media/edu.google.com/en//pdfs/Intelligence-Unleashed-Publication.pdf'(>)https://static.googleusercontent.com/media/edu.google.com/en//pdfs/Intelligence-Unleashed-Publication.pdf(<)/a(>).
- Flawed methods that boost assertions about model performance
- Failure to use robust methodologies in machine learning research has enabled false assertions about system performance to proliferate.64Sayash Kapoor et al., “REFORMS: Reporting Standards for Machine Learning Based Science,” (<)em(>)arXiv(<)/em(>), September 19, 2023, (<)a href='https://arxiv.org/abs/2308.07832'(>)arXiv:2308.07832(<)/a(>). Among other issues, many studies have been critiqued for failing to demonstrate construct validity—that is, failing to show that the test used to evaluate a system actually measures the concept it is intended to measure.65Ahmed Alla et al., “Medical Large Language Model Benchmarks Should Prioritize Construct Validity,” (<)em(>)arXiv(<)/em(>), March 12, 2025, (<)a href='https://arxiv.org/abs/2503.10694'(>)arXiv:2503.10694(<)/a(>).
- In 2020, a team of researchers published a paper (cited more than nine hundred times) that claimed AI could be effectively used to diagnose COVID-19 via chest X-rays.66Asif Iqbal Khan, Junaid Latief Shah, and Mohammad Mudasir Bhat, “CoroNet: A Deep Neural Network for Detection and Diagnosis of COVID-19 From Chest X-Ray Images,” (<)em(>)Computer Methods and Programs in Biomedicine (<)/em(>)196, no. 105581 (November 2020), (<)a href='https://doi.org/10.1016/j.cmpb.2020.105581'(>)https://doi.org/10.1016/j.cmpb.2020.105581(<)/a(>). Later, two scientists at Kansas State found that the AI model was picking up on background artifacts—not clinically relevant features of the images—rendering the models “medically useless.”67Philip Ball, “Is AI Leading to a Reproducibility Crisis in Science?” (<)em(>)Nature(<)/em(>), December 5, 2023, (<)a href='https://doi.org/10.1038/d41586-023-03817-6'(>)https://doi.org/10.1038/d41586-023-03817-6(<)/a(>). (The sketch following this list illustrates one simple way such shortcut learning can be exposed.)
- A meta-review conducted in 2021 examined sixty-two additional studies that attempted COVID-19 diagnosis using machine learning evaluation of chest X-rays, and found that methodological flaws and underlying biases invalidated every single study reviewed, rendering their findings useless to clinicians.68Michael Roberts et al., “Common Pitfalls and Recommendations for Using Machine Learning to Detect and Prognosticate for COVID-19 Using Chest Radiographs and CT Scans,” (<)em(>)Nature Machine Intelligence (<)/em(>)3 (March 2021): 199–217, (<)a href='https://doi.org/10.1038/s42256-021-00307-0'(>)https://doi.org/10.1038/s42256-021-00307-0(<)/a(>).
- Self-dealing in the development and use of benchmarks
- The absence of independent and robust evaluation metrics for foundation models is a persistent barrier to implementing more robust validation requirements for these systems.69Laura Weidinger et al., “Toward an Evaluation Science for Generative AI”, arXiv, March 13, 2025, (<)a href='https://arxiv.org/abs/2503.05336'(>)https://arxiv.org/abs/2503.05336(<)/a(>). This is a hard problem on its own: The benchmarks being used currently are drifting away from evaluating actual model capabilities,70Russell Brandom, “How to Build a Better AI Benchmark,” (<)em(>)Technology Review(<)/em(>), May 8, 2025, (<)a href='https://www.technologyreview.com/2025/05/08/1116192/how-to-build-a-better-ai-benchmark'(>)https://www.technologyreview.com/2025/05/08/1116192/how-to-build-a-better-ai-benchmark(<)/a(>). leading to gaming of the system;71Emanuel Maiberg, “Researchers Say the Most Popular Tool for Grading AIs Unfairly Favors Meta, Google, OpenAI,” (<)em(>)404 Media(<)/em(>), (<)a href='https://www.404media.co/chatbot-arena-illusion-paper-meta-openai'(>)https://www.404media.co/chatbot-arena-illusion-paper-meta-openai(<)/a(>). and the increased generality of large-scale models makes them harder to measure.72Brandom, “How to Build a Better AI Benchmark.”
- In the absence of independent, widely agreed-on benchmarks for measuring key attributes such as accuracy, companies are inventing their own, and, in some cases, selling both the product and platforms for benchmark validation to the same customer.
- For example, Scale AI holds contracts worth hundreds of millions of dollars with the Pentagon to produce AI models for military deployment73Jackson Barnett, “Scale AI Awarded $250M contract by Department of Defense,” (<)em(>)Fedscoop(<)/em(>), January 31, 2022, (<)a href='https://fedscoop.com/scale-ai-awarded-250m-ai-contract-by-department-of-defense/'(>)https://fedscoop.com/scale-ai-awarded-250m-ai-contract-by-department-of-defense(<)/a(>); Hayden Field, “Scale AI Announces Multimillion-Dollar Defense Deal, a Major Step in U.S. Military Automation,” CNBC, March 5, 2025, (<)a href='https://www.cnbc.com/2025/03/05/scale-ai-announces-multimillion-dollar-defense-military-deal.html'(>)https://www.cnbc.com/2025/03/05/scale-ai-announces-multimillion-dollar-defense-military-deal.html(<)/a(>).—including a contract for $20 million for the platform that will be used to assess the accuracy of AI models for defense agencies.74The Scale Team, “Scale AI Partners with DoD’s Chief Digital and Artificial Intelligence Office (CDAO) to Test and Evaluate LLMs,” February 20, 2024, (<)a href='https://scale.com/blog/scale-partners-with-cdao-to-test-and-evaluate-llms'(>)https://scale.com/blog/scale-partners-with-cdao-to-test-and-evaluate-llms(<)/a(>); Brandi Vincent, “Scale AI to Set the Pentagon’s Path for Testing and Evaluating Large Language Models,” (<)em(>)Defense Scoop(<)/em(>), February 20, 2024, (<)a href='https://defensescoop.com/2024/02/20/scale-ai-pentagon-testing-evaluating-large-language-models/'(>)https://defensescoop.com/2024/02/20/scale-ai-pentagon-testing-evaluating-large-language-models(<)/a(>); Chief Digital and Artificial Intelligence Office (CDAO), “Artificial Intelligence Rapid Capabilities Cell,” December 11, 2024, (<)a href='https://www.ai.mil/Portals/137/Documents/Resources%20Page/2024-12-CDAO-Artificial-Intelligence-Rapid-Capabilities-Cell.pdf'(>)https://www.ai.mil/Portals/137/Documents/Resources%20Page/2024-12-CDAO-Artificial-Intelligence-Rapid-Capabilities-Cell.pdf(<)/a(>).
- The use of overgeneralized benchmarks is particularly problematic when AI technologies are implemented in areas with widely different risk thresholds—what might pass muster for accuracy in a behavioral marketing setting, for example, won’t translate well into a setting with life-or-death stakes, as in healthcare or warfare.75Inioluwa Deborah Raji et al., “AI and the Everything in the Whole Wide World Benchmark,” (<)em(>)arXiv(<)/em(>), November 26, 2021, (<)a href='https://arxiv.org/abs/2111.15366'(>)arXiv:2111.15366(<)/a(>).
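As referenced above, a simple way to expose the kind of shortcut learning found in the COVID-19 X-ray studies is an occlusion test: mask the region a clinician would actually read and check whether the model's accuracy changes. The sketch below is an illustrative, self-contained toy (synthetic "images," scikit-learn logistic regression), not the reviewers' protocol; a model whose accuracy survives masking is relying on artifacts rather than pathology.

```python
# Minimal sketch (my illustration, not the reviewers' protocol): an occlusion test for
# shortcut learning. Synthetic "X-rays" leak the label through a corner artifact, a
# stand-in for scanner text or hospital-specific markers. A classifier trained on them
# keeps near-perfect accuracy even when the central region a clinician would read is
# blanked out, which is evidence it never used clinically relevant features.
# All data are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, size = 400, 16

def make_images(n_samples):
    labels = rng.integers(0, 2, n_samples)
    images = rng.normal(0.0, 1.0, (n_samples, size, size))
    images[labels == 1, 0, 0] += 3.0  # spurious corner artifact correlated with the label
    return images, labels

train_x, train_y = make_images(n)
test_x, test_y = make_images(n)

clf = LogisticRegression(max_iter=1000).fit(train_x.reshape(n, -1), train_y)

masked = test_x.copy()
masked[:, 4:12, 4:12] = 0.0  # blank out the central "lung" region

print("accuracy on full images:   ", clf.score(test_x.reshape(n, -1), test_y))
print("accuracy with lungs masked:", clf.score(masked.reshape(n, -1), test_y))
# Near-identical scores indicate the model relies on background artifacts, not pathology.
```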

2. AI-Sized Solutions to Entrenched Social Problems Displace Grounded Expertise

The tech industry has long been prone to technosolutionism, insisting that technical expertise is substitutable for other forms of expertise and can offer quicker or more scalable solutions.76The journalist and researcher Meredith Broussard has reframed technosolutionism as (<)em(>)technochauvinism(<)/em(>), or the misplaced belief that technological solutions are superior to other means of addressing problems. See Broussard, (<)em(>)Artificial Unintelligence(<)/em(>) (MIT Press, 2019). But what’s different about the current wave of AI hype is that the industry is now seeking to reframe social problems—down to their root causes—in order to assert AI as the universal fix. This has the effect of undermining the authority of trained professionals who come from these fields.
In education, technosolutionism takes various forms: farming out educational activities such as plagiarism detection, assessment, and grading to automated tools; implementing technologies that have persistently been shown to lead to worse education outcomes and poorer teaching conditions; and degrading public perception of education while devaluing educational workers’ labor.77Christopher Newfield, (<)em(>)The Great Mistake: How We Wrecked Public Universities and How We Can Fix Them(<)/em(>) (Johns Hopkins University Press, 2016); Howard Besser and Maria Bonn, “Impact of Distance Independent Education,” (<)em(>)Journal of the American Society for Information Science(<)/em(>) 47, no. 11 (November 1996): 880–83, (<)a href='https://asistdl.onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-4571(199611)47:11%3C880::AID-ASI14%3E3.0.CO;2-Z'(>)https://asistdl.onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-4571(199611)47:11%3C880::AID-ASI14%3E3.0.CO;2-Z(<)/a(>); Kelli Bird et al., “Big Data on Campus,” (<)em(>)Education Next(<)/em(>) (blog), July 20, 2021, (<)a href='https://www.educationnext.org/big-data-on-campus-putting-predictive-analytics-to-the-test/'(>)https://www.educationnext.org/big-data-on-campus-putting-predictive-analytics-to-the-test(<)/a(>); Neil Selwyn, (<)em(>)Education and Technology: Key Issues and Debates(<)/em(>) (Continuum International Publishing Group, 2011); Neil Selwyn, (<)em(>)Distrusting Educational Technology: Critical Questions for Changing Times(<)/em(>) (Taylor and Francis, 2013); Britt Paris et al., “Sins of Omission: Critical Informatics Perspectives on Higher Education Learning Analytics,” (<)em(>)Journal for the Association for Information Science and Technology(<)/em(>) 59, no. 1 (October 2022): 479–485, (<)a href='https://doi.org/10.1002/asi.24575'(>)https://doi.org/10.1002/asi.24575(<)/a(>). In the effort to minimize costs, university administrators adopting these tools marginalize the professionals who are at the center of the educational endeavor. This pattern can be observed across industries.
Many educators remember the failures of the last technological solution craze, massive open online courses (MOOCs), which promised to democratize access to education by providing free courses to thousands of students online. Universities downsized their departments and invested in technological and physical infrastructure to make online videos. “Now MOOCs have faded from glory,” writes Tressie McMillan Cottom, “but in most cases, the experts haven’t returned.”78Tressie McMillan Cottom, “The Tech Fantasy That Powers A.I. Is Running on Fumes,” (<)em(>)New York Times(<)/em(>), March 29, 2025, (<)a href='https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html'(>)https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html(<)/a(>).
Instead, AI initiatives have taken their place in the next cycle of technosolutionism: OpenAI’s ChatGPT Edu initiative provides US universities with access to a suite of tools designed to “bring AI to their campuses at scale.”79OpenAI, “Introducing ChatGPT Edu,” May 30, 2024, (<)a href='https://openai.com/index/introducing-chatgpt-edu/'(>)https://openai.com/index/introducing-chatgpt-edu(<)/a(>). OpenAI recently partnered with the government of Estonia to bring ChatGPT Edu to all high schools in the country.80Dan Fitzpatrick, “ChatGPT to Be Given to All Estonian High School Students,” (<)em(>)Forbes(<)/em(>), February 26, 2025, (<)a href='https://www.forbes.com/sites/danfitzpatrick/2025/02/26/chatgpt-to-be-given-to-all-estonian-high-school-students/'(>)https://www.forbes.com/sites/danfitzpatrick/2025/02/26/chatgpt-to-be-given-to-all-estonian-high-school-students(<)/a(>). The Trump Administration issued an executive order encouraging the adoption of AI into K–12 education through public-private partnerships, with the goal to train students in AI and incorporate AI into teaching-related tasks from training to evaluation.81“Executive Order 14277 of April 23, 2025, Advancing Artificial Intelligence Education for American Youth,” (<)em(>)Code of Federal Regulations(<)/em(>), title 90 (2025): 17519 (<)a href='https://www.federalregister.gov/documents/2025/04/28/2025-07368/advancing-artificial-intelligence-education-for-american-youth'(>)https://www.federalregister.gov/documents/2025/04/28/2025-07368/advancing-artificial-intelligence-education-for-american-youth(<)/a(>). California State University, the largest public four-year university in the US, recently announced a $16 million deal with OpenAI, Google, Microsoft, Nvidia, and others to create an “AI-empowered higher education system.”82California State University, “CSU Announces Landmark Initiative to Become Nation’s First and Largest AI-Empowered University System,” press release, February 4, 2025, (<)a href='https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx'(>)https://www.calstate.edu/csu-system/news/Pages/CSU-AI-Powered-Initiative.aspx(<)/a(>). In some cases, schools are shifting entirely to an AI-centered learning model: Alpha School markets itself as an “AI-powered private school” where kids can meet with an AI tutor to “crush academics in two hours” and then use the remaining time to “pursue their life passions.”83Alpha School (website), accessed April 8, 2025, (<)a href='https://alpha.school/'(>)https://alpha.school(<)/a(>).
Los Angeles Unified School District, the second largest in the United States, conducted a disastrous experiment with an AI chatbot, “Ed,” from AllHere, a company that quickly imploded after it was revealed that it compromised sensitive student data, including special-education and student-discipline information.84Mark Keierleber, “Whistleblower: LA Schools’ Chatbot Misused Student Data as Tech Co Crumbled,” (<)em(>)The 74 Million(<)/em(>), July 1, 2024, (<)a href='https://www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled/'(>)https://www.the74million.org/article/whistleblower-l-a-schools-chatbot-misused-student-data-as-tech-co-crumbled(<)/a(>). Sometimes AI is being added quietly in the background, such as through integrations into Canvas (the learning management system used by many university systems).
Schools and school districts end up cutting jobs—often by declining to renew the contracts of term employees, cuts that don’t show up in layoff statistics—in order to spend millions of dollars on technologies that ultimately fail to deliver on their promises, leaving frustrated students and educators in their wake.
This recent push to integrate large-scale AI systems into schools is particularly nefarious because it threatens to undermine the purpose of education, trading the time-consuming process of learning for “efficiency.” As Professors Sonja Drimmer and Christopher J. Nygren write in How We Are Not Using AI in the Classroom, large-scale AI systems like LLMs are good at pattern recognition and token prediction, but not at learning.85Sonja Drimmer and Christopher J. Nygren, “How We Are Not Using AI in the Classroom, ” (<)em(>)The Newsletter of the International Center of Medieval Art(<)/em(>), no. 1 (Spring 2025): 25–28, (<)a href='https://static1.squarespace.com/static/55577d2fe4b02de6a6ea49cd/t/67dfeb8d9ff3a5472a6d719d/1742728078061/Drimmer_Nygren_Not_Using_AI.pdf'(>)https://static1.squarespace.com/static/55577d2fe4b02de6a6ea49cd/t/67dfeb8d9ff3a5472a6d719d/1742728078061/Drimmer_Nygren_Not_Using_AI.pdf(<)/a(>). Drimmer and Nygren argue that a fundamental disjuncture exists between what LLMs are trained to do—predict what is most likely to come next—and what educators train students to do: find details that diverge from the baseline, imagine alternatives, and foster the capacity to “think well, read well, listen well, and look well.” For Drimmer and Nygren, learning is as much about the process of learning—equal parts lived, institutional, and natural human instinct—as it is about the outcomes. “The power of learning to write is not the written product itself but the process of learning to write. Ultimately, AI short circuits that process and in so doing breaches the entire educational contract.”86Ibid.
The adoption of facial-recognition systems in schools not only heralds the replacement of teachers and counselors with AI-enabled cameras, but also deprives teachers of the agency to decide how to improve safety in their school communities. In 2018, the Lockport City School District in upstate New York procured $4 million in state grant funding and purchased a facial-recognition technology system to use in its schools.87Stefanie Coyle and Rashida Richardson, “Bottom-Up Biometric Regulation: A Community’s Response to Using Face Surveillance in Schools,” in Amba Kak, ed., (<)em(>)Regulating Biometrics: Global Approaches and Urgent Questions(<)/em(>), AI Now Institute, September 2020, (<)a href='https://ainowinstitute.org/wp-content/uploads/2023/09/regulatingbiometrics_Bottom-Up-Biometric-Regulation.pdf'(>)https://ainowinstitute.org/wp-content/uploads/2023/09/regulatingbiometrics_Bottom-Up-Biometric-Regulation.pdf(<)/a(>). Despite the state grant program’s requirement to engage parents, teachers, students, and the school community, the decision to purchase the system was taken at a sparsely attended school board meeting in late summer.88Letter from Stefanie Coyle to NYSED Commissioner MaryEllen Elia, September 26, 2018. The teachers’ union president would eventually say they were not even consulted on the decision.89Connor Hoffman, “Citizens Petition LCSD to Postpone Security Project,” (<)em(>)Lockport Journal(<)/em(>), March 20, 2018, (<)a href='https://www.lockportjournal.com/news/local_news/citizens-petition-lcsd-to-postpone-security-project/article_e8d547e1-0e3a-5a2e-81c3-5879422548c2.html'(>)https://www.lockportjournal.com/news/local_news/citizens-petition-lcsd-to-postpone-security-project/article_e8d547e1-0e3a-5a2e-81c3-5879422548c2.html(<)/a(>).
Allegedly purchased to “prevent school shootings,” Lockport’s system consisted of a “red list” of individuals barred from the school campuses, including local registered sex offenders and suspended students.90Davey Alba, “Facial Recognition Moves Into a New Front: Schools,” (<)em(>)New York Times(<)/em(>), February 6, 2020, (<)a href='https://www.nytimes.com/2020/02/06/business/facial-recognition-schools.html'(>)https://www.nytimes.com/2020/02/06/business/facial-recognition-schools.html(<)/a(>). Instead of using teachers and counselors to help students in crisis, the district turned to technology that lacked the ability to prevent, or even detect, a school shooting.91Lockport’s weapons-detection system misidentified broom handles as guns so frequently that it had to be disabled. See Todd Feathers, “Facial Recognition Company Lied to School District About Its Racist Tech,” (<)em(>)Vice(<)/em(>), December 1, 2020, (<)a href='https://www.vice.com/en/article/fac-recognition-company-lied-to-school-district-about-its-racist-tech/'(>)https://www.vice.com/en/article/fac-recognition-company-lied-to-school-district-about-its-racist-tech(<)/a(>). Community organizing, litigation, and legislative advocacy ultimately defeated Lockport’s system—but, absent federal regulation, other school districts have persisted in obtaining facial-recognition systems.92Carolyn Thompson, “New York Bans Facial Recognition in Schools After Report Finds Risks Outweigh Potential Benefits,” Associated Press, September 27, 2023, (<)a href='https://apnews.com/article/facial-recognition-banned-new-york-schools-ddd35e004254d316beabf70453b1a6a2'(>)https://apnews.com/article/facial-recognition-banned-new-york-schools-ddd35e004254d316beabf70453b1a6a2(<)/a(>).
In addition to hard security, schools have also turned to online surveillance systems, a practice that started during the pandemic. One system in particular, GoGuardian, infiltrated large school districts across the US, including New York City Public Schools.93Simon McCormack and Stefanie Coyle, “This Software Could Be Spying on NYC Students,” NYCLU Commentary, (<)a href='https://www.nyclu.org/commentary/software-could-be-spying-nyc-students'(>)https://www.nyclu.org/commentary/software-could-be-spying-nyc-students(<)/a(>). Under the guise of preventing student self-harm, GoGuardian allowed teachers and administrators unfettered access to student devices, with the ability to view Google searches, remotely activate webcams, perform web filtering, and close tabs.94Nader Issa, “CPS Teachers Could Look Inside Students’ Homes — Without Their Knowledge — Before Fix,” (<)em(>)Chicago Sun Times(<)/em(>), October 5, 2020, (<)a href='https://chicago.suntimes.com/education/2020/10/5/21497946/cps-public-schools-go-guardian-technology-privacy-remote-learning'(>)https://chicago.suntimes.com/education/2020/10/5/21497946/cps-public-schools-go-guardian-technology-privacy-remote-learning(<)/a(>). GoGuardian claims it has thwarted more than 18,000 attempts at self-harm, citing its own system as the source for this statistic.95“Since 2020, Beacon has prevented an estimated 18,623 students from physical harm.” GoGuardian (website), accessed May 17, 2025, (<)a href='https://www.goguardian.com/#footnotes'(>)https://www.goguardian.com/#footnotes(<)/a(>).
Those with clear on-the-ground expertise are rarely included in decision-making about where and under what conditions AI is deployed. For example, educators Martha Fay Burtis and Jesse Stommel detail a technology adoption process at their regional public university. The educators were invited to participate in meetings with a new technology vendor, EAB (the former Education Advisory Board), believing their role was to offer insight and expertise around tech use in the classroom and beyond. Instead, they discovered that “the choice about adopting this platform had already been made, and there was little opportunity to engage meaningfully with EAB’s representatives about the misalignments we observed.”96Jesse Stommel and Martha Burtis, “Bad Data Are Not Better than No Data,” AAUP, (<)em(>)The Higher Ed Data Juggernaut(<)/em(>), Winter 2024, (<)a href='https://www.aaup.org/article/bad-data-are-not-better-no-data'(>)https://www.aaup.org/article/bad-data-are-not-better-no-data(<)/a(>). In higher ed, as in K–12, administrators negotiate most corporate educational technology vendor contracts without any involvement from faculty members, students, or parents, and with little, if any, accountability to those core constituents. Britt Paris and colleagues found that across higher education, university administrators adopt unproven and untested corporate educational technologies for vast sums of money to supplant existing technologies run by university technology services. Once touted simply as data-driven tools, those corporate technologies now increasingly incorporate LLMs and other AI into their infrastructure, as with Canvas’s Khanmigo integration, which uses GPT-4.97See Britt Paris et al., “Sins of Omission”; Catherine McGowan et al., “Educational Technology and the Entrenchment of ‘Business as Usual’,” American Association of University Professors (AAUP), (<)em(>)Academe Magazine(<)/em(>), (<)a href='http://www.aaup.org/article/educational-technology-and-entrenchment-%E2%80%9Cbusiness-usual%E2%80%9D'(>)www.aaup.org/article/educational-technology-and-entrenchment-%E2%80%9Cbusiness-usual%E2%80%9D(<)/a(>); and Britt Paris et al., “Platforms Like Canvas Play Fast and Loose With Students’ Data,” (<)em(>)Nation(<)/em(>), April 22, 2021, (<)a href='https://www.thenation.com/article/society/canvas-surveillance/'(>)https://www.thenation.com/article/society/canvas-surveillance(<)/a(>).
Outside of education, labor shortages are used as a justification for AI deployment. But many so-called “labor shortages” are the product of poor working conditions, inadequate pay, and institutional failures—none of which will be solved by AI. When it comes to nursing, for example, hospitals have trouble staffing nurses not because of a nursing labor shortage, but because of the failure of hospital boards and administrators to implement critical health and safety protections. National Nurses United has stated clearly that there is no shortage of nursing professionals in the field. “Simply put,” they write, “there is a failure by hospital industry executives to put nurses and the patients they care for above corporate profits.”98National Nurses United, to Interested Parties, March 30, 2023, (<)a href='https://www.nationalnursesunited.org/sites/default/files/nnu/documents/Reporter_Memo_Hospital_Staffing_Crisis.pdf'(>)https://www.nationalnursesunited.org/sites/default/files/nnu/documents/Reporter_Memo_Hospital_Staffing_Crisis.pdf(<)/a(>). Large technology investments designed to replace nurses do nothing to meaningfully address the root causes of these problems or to advance real remedies—such as federal minimum standards to support strong nurse-to-patient staffing ratios or investments in adequate personal protective equipment (PPE)—nor do they disrupt the underlying profit motives.
Nevertheless, even as hospital workers grapple with unsustainable working conditions, hospitals continue to invest money and resources into AI technologies. In 2023, the Permanente Medical Group, a division of US healthcare giant Kaiser Permanente, signed a large-scale partnership agreement with Nabla, an AI transcription company, at the same time that Kaiser faced the largest healthcare worker strike in US history due to understaffing, burnout, and low wages.99Ingrid Lunden, “As Its Workers Strike, Kaiser Permanente Strikes a Deal for Physicians to Use an AI Copilot from Nabla,” (<)em(>)TechCrunch(<)/em(>), October 5, 2023, (<)a href='https://techcrunch.com/2023/10/05/as-its-workers-strike-over-burnout-and-low-wages-kaiser-permanente-strikes-a-deal-to-use-an-ai-copilot-from-nabla/'(>)https://techcrunch.com/2023/10/05/as-its-workers-strike-over-burnout-and-low-wages-kaiser-permanente-strikes-a-deal-to-use-an-ai-copilot-from-nabla(<)/a(>).
There are parallels here to the labor crisis facing the agricultural industry, where farmwork is widely viewed as one of the most dangerous lines of work because of high rates of injury, pesticide exposure, extreme heat, and lack of sun protection.100National Farm Worker Ministry, “Issues Affecting Farm Workers,” (<)a href='https://nfwm.org/farm-workers/farm-worker-issues/'(>)https://nfwm.org/farm-workers/farm-worker-issues(<)/a(>). It should come as no surprise that workers are not eager to join an industry with a legacy of racism,101Juan F. Perea, “The Echoes of Slavery: Recognizing the Racist Origins of the Agricultural and Domestic Worker Exclusion from the National Labor Relations Act,”(<)em(>) Ohio State Law Journal(<)/em(>) 72, no. 1 (2011): 95–138, (<)a href='http://doi.org/10.2139/ssrn.1646496'(>)http://doi.org/10.2139/ssrn.1646496(<)/a(>). long hours of back-breaking labor, and criminally low wages. Farmworkers have some of the lowest annual family incomes in the United States and are categorically excluded from the National Labor Relations Act of 1935, which provides workers with critical labor protections like the right to organize.102See Perea, “The Echoes of Slavery,” 96; and National Farm Worker Ministry, “Issues Affecting Farm Workers,” 2022, (<)a href='https://nfwm.org/wp-content/uploads/2022/12/Farm-Worker-Issues-Two-Pager-.pdf'(>)https://nfwm.org/wp-content/uploads/2022/12/Farm-Worker-Issues-Two-Pager-.pdf(<)/a(>). State-level regulations to protect farmworker organizing are also being undone by the Supreme Court.103(<)em(>)Cedar Point Nursery, et al. v. Hassid(<)/em(>), 594 U.S. 139 (2021). On top of this, many farmworkers have precarious immigration status and fear deportation if they raise concerns about their working conditions. Yet, agritech startups are targeting small farms with AI-powered robotics that will purportedly resolve labor shortages.104Shanelle Kaul, “How AI Powered Robots Are Helping Small Farms Fight Labor Shortages,” CBS News, March 28, 2024, (<)a href='https://www.cbsnews.com/video/how-ai-powered-robots-are-helping-small-farms-fight-labor-shortages'(>)https://www.cbsnews.com/video/how-ai-powered-robots-are-helping-small-farms-fight-labor-shortages(<)/a(>). Stout, which produces a smart cultivator used in fields across the country, markets its machine as a solution to reduce reliance on “costly and scarce manual labor.”105The WG Center for Innovation & Technology, (<)em(>)Western Growers Case Study(<)/em(>), November 2024, (<)a href='https://wga.s3.us-west-1.amazonaws.com/cit/2024/cit_case-study-stout.pdf'(>)https://wga.s3.us-west-1.amazonaws.com/cit/2024/cit_case-study-stout.pdf(<)/a(>).
Quick-Fix Solutionism Undermines Structural Interventions

In the rush to implement quick-fix AI solutions, administrators are displacing the long-term structural interventions needed to improve educational outcomes. This was perhaps most clear in the education sector during the COVID-19 pandemic, when school districts rushed to adopt faulty computer-monitoring software rife with potential privacy concerns rather than address students’ needs. As educators Martha Burtis and Jesse Stommel write: “Frankly, it’s insulting when institutions throw money at corporate edtech when so many of their most marginalized students are struggling, faculty/staff have been furloughed, public funding has been decimated, and the work of teaching has been made altogether precarious.”106Jesse Stommel and Martha Burtis, “Counter-Friction to Stop the Machine: The Endgame for Instructional Design,” (<)em(>)Hybrid Pedagogy(<)/em(>), April 27, 2021, (<)a href='https://hybridpedagogy.org/the-endgame-for-instructional-design'(>)https://hybridpedagogy.org/the-endgame-for-instructional-design(<)/a(>). The American Association of University Professors, the union for professors, is developing a labor strategy for higher education workers addressing how AI is affecting their workplaces.107(<)em(>)AI Infiltration into Higher Education: AAUP Survey Findings and Strategies(<)/em(>), American Association of University Professors Ad Hoc Committee on AI and the Profession, link forthcoming.
In the education sector, the jump to technological solutions diverts funding and attention away from the kinds of investments that are most meaningful to students, like smaller class sizes and good facilities. Simultaneously, it fuels the conditions used to justify the dismantling of public education in the name of “AI teachers.”108Greg Toppo, “Was Los Angeles Schools’ $6 Million AI Venture a Disaster Waiting to Happen?” (<)em(>)The 74 Million(<)/em(>), July 9, 2024, (<)a href='https://www.the74million.org/article/was-los-angeles-schools-6-million-ai-venture-a-disaster-waiting-to-happen/'(>)https://www.the74million.org/article/was-los-angeles-schools-6-million-ai-venture-a-disaster-waiting-to-happen(<)/a(>).
The same can be said for the legal system, where funding is diverted away from meaningful measures to provide affordable and accessible legal services and into AI tool development designed to “hack” lawyering.109Nora Freeman Engstrom and David Freeman Engstrom, “The Making of the A2J Crisis,” (<)em(>)Stanford Law Review Online(<)/em(>) 75, no. 146 (May 2024), (<)a href='https://ssrn.com/abstract=4817329'(>)https://ssrn.com/abstract=4817329(<)/a(>). Andy J. Semotiuk writes about how immigration attorneys’ reliance on AI tools without proper vetting “introduces a dangerous vulnerability, potentially exposing immigrants to the risk of erroneous legal advice, unjust outcomes, and exploitation.”110Andy J. Semotiuk, “How AI Is Impacting Immigration Cases and What to Expect,” (<)em(>)Forbes(<)/em(>), March 23, 2024, (<)a href='https://www.forbes.com/sites/andyjsemotiuk/2024/03/23/how-ai-is-impacting-immigration-cases-and-what-to-expect/'(>)https://www.forbes.com/sites/andyjsemotiuk/2024/03/23/how-ai-is-impacting-immigration-cases-and-what-to-expect(<)/a(>). These effects are not just individual, either. “The consequences of such negligence or malpractice can extend far beyond individual cases,” Semotiuk continues, “impacting entire communities and perpetuating systemic injustices if digital inaccuracy distorts the legal domain.”
Epic’s Sepsis Model Failure: A Case Study in Technosolutionism

In 2017, Epic, the leading technology company for electronic medical records, released an AI tool designed to predict sepsis, a deadly condition that develops in response to infection. The algorithm flagged which patients were at risk of developing sepsis so that healthcare professionals could act quickly to prevent its onset. Epic advertised that its algorithm was accurate 80 percent of the time. Without verifying this claim—and without any regulatory oversight or approval—hundreds of hospitals implemented the algorithm.111Casey Ross, “A Popular Algorithm to Predict Sepsis Misses Most Cases and Sends Frequent False Alarms, Study Finds,” (<)em(>)Stat(<)/em(>), June 21, 2021, (<)a href='https://www.statnews.com/2021/06/21/epic-sepsis-prediction-tool/'(>)https://www.statnews.com/2021/06/21/epic-sepsis-prediction-tool(<)/a(>). (The model was included as part of Epic’s “honor roll” incentive program, which provides hundreds of thousands of dollars to hospitals that implement Epic’s technology.)112Casey Ross, “Epic’s AI Algorithms, Shielded from Scrutiny by a Corporate Firewall, Are Delivering Inaccurate Information on Seriously Ill Patients,” (<)em(>)Stat(<)/em(>), July 26, 2021, (<)a href='https://www.statnews.com/2021/07/26/epic-hospital-algorithms-sepsis-investigation/'(>)https://www.statnews.com/2021/07/26/epic-hospital-algorithms-sepsis-investigation(<)/a(>).
Yet when researchers at the University of Michigan evaluated Epic’s model in the university’s own hospital system years later, they found that the model was accurate only 63 percent of the time. The model also routinely generated false alarms, drawing doctors away from patients with other high-risk medical conditions.113Andrew Wong et al., “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients,” (<)em(>)JAMA Internal Medicine(<)/em(>) 181, no. 8 (2021): 1065–1070, (<)a href='https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2781307'(>)https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2781307(<)/a(>). Washington University reported similar accuracy rates.114Patrick G. Lyons et al., “How Useful Is the Epic Sepsis Prediction Model for Predicting Sepsis?” (<)em(>)American Journal of Respiratory and Critical Care Medicine(<)/em(>) (May 2021), (<)a href='https://doi.org/10.1164/ajrccm-conference.2021.203.1_MeetingAbstracts.A1580'(>)https://doi.org/10.1164/ajrccm-conference.2021.203.1_MeetingAbstracts.A1580(<)/a(>). Epic disputed the research and findings.115Ross, “Epic’s AI Algorithms, Shielded From Scrutiny by a Corporate Firewall, Are Delivering Inaccurate Information on Seriously Ill Patients.” But after lengthy investigations by STAT, a leading medical publication, into Epic’s models—including into Epic’s lack of model transparency—Epic rereleased the model, overhauling its data variables and definitions and providing lengthy updated guidance on implementation.116Ibid.
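The core exercise behind these external validations, comparing a vendor’s advertised performance against locally confirmed patient outcomes, can be sketched in a few lines of code. The example below is a hypothetical, simplified illustration, not Epic’s model or any hospital’s actual validation protocol (real studies report measures such as the area under the ROC curve across thousands of hospitalizations); it assumes only a list of model alerts and a matching list of chart-confirmed sepsis outcomes.

```python
# Hypothetical sketch: checking a vendor's advertised accuracy against local,
# chart-confirmed outcomes before relying on the tool. Toy data, not real results.

def confusion_counts(alerts, outcomes):
    """Count true/false positives and negatives for binary model alerts."""
    tp = sum(1 for a, y in zip(alerts, outcomes) if a and y)
    fp = sum(1 for a, y in zip(alerts, outcomes) if a and not y)
    fn = sum(1 for a, y in zip(alerts, outcomes) if not a and y)
    tn = sum(1 for a, y in zip(alerts, outcomes) if not a and not y)
    return tp, fp, fn, tn

def validate_locally(alerts, outcomes, vendor_claimed_accuracy):
    tp, fp, fn, tn = confusion_counts(alerts, outcomes)
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total                         # overall agreement with chart review
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0   # share of sepsis cases the model caught
    false_alarms = fp / (fp + tn) if (fp + tn) else 0.0  # alerts fired on patients without sepsis
    print(f"local accuracy:   {accuracy:.2f} (vendor claims {vendor_claimed_accuracy:.2f})")
    print(f"sensitivity:      {sensitivity:.2f}")
    print(f"false-alarm rate: {false_alarms:.2f}")

# Hypothetical local chart review: 1 = sepsis alert / confirmed sepsis, 0 = none.
model_alerts     = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
confirmed_sepsis = [0, 0, 1, 0, 0, 1, 0, 0, 0, 1]
validate_locally(model_alerts, confirmed_sepsis, vendor_claimed_accuracy=0.80)
```

The point of the sketch is procedural rather than technical: none of this requires access to the vendor’s proprietary model, only local outcome data, which is why the absence of independent validation before deployment is a choice, not an inevitability.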

3. AI Solutionism Obscures Systemic Issues Facing Our Economy—Often Acting as a Conduit for Deploying Austerity Mandates by Another Name

Whether or not AI technologies live up to the hype may not even matter to the actors most invested in their rollout: AI solutionism is used to obscure systemic issues facing our economy, acting as a justification for austerity by another name. A recent report by TechTonic Justice provides a comprehensive illustration of how AI is used to restrict the opportunities for low-income people, exposing them to AI decision-making that denies them access to resources and shapes their life chances, from Medicaid coverage denials, to how much renting an apartment will cost, to determinations about family separation by child welfare agencies.117“About Techtonic Justice,” Techtonic Justice (website), accessed April 25, 2025, (<)a href='https://www.techtonicjustice.org/about'(>)https://www.techtonicjustice.org/about(<)/a(>). The report states:
“The alleged rationality and objectivity of the system allow the users of AI to justify harmful actions, like benefit cuts or law enforcement harassment of students, or reinforce status quo power imbalances, such as that between employers and employees. Revealingly, the AI systems applied to low-income people almost never improve access to benefits or other opportunities.”118Kevin De Liban, (<)em(>)Inescapable AI: The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive(<)/em(>), Techtonic Justice, November 2024, (<)a href='https://static1.squarespace.com/static/65a1d3be4690143890f61cec/t/673c7170a0d09777066c6e50/1732014450563/ttj-inescapable-ai.pdf'(>)https://static1.squarespace.com/static/65a1d3be4690143890f61cec/t/673c7170a0d09777066c6e50/1732014450563/ttj-inescapable-ai.pdf(<)/a(>).
Numerous historical examples reveal where this playbook leads: widespread disenfranchisement of the public and costly litigation for the government agencies deploying AI services. Importantly, newer AI tools have not overcome these failures; if anything, the flaws in foundation models make them even worse:
- Over a decade ago, Michigan spent $47 million to develop an automated fraud-detection system intended to identify benefit fraud in the state.119Bryce Covert, “States Are Turning Their Public Benefits Systems Over to AI. The Results Have Often Led to ‘Immense Suffering’,” (<)em(>)Fast Company(<)/em(>), January 23, 2025, (<)a href='https://www.fastcompany.com/91265363/states-are-turning-their-public-benefits-systems-over-to-ai-the-results-have-often-led-to-immense-suffering'(>)https://www.fastcompany.com/91265363/states-are-turning-their-public-benefits-systems-over-to-ai-the-results-have-often-led-to-immense-suffering(<)/a(>). The system accused over sixty thousand residents of fraud; 70 percent of its determinations were later revealed to be incorrect. Similar errors concerning Medicaid eligibility occurred in Indiana, Arkansas, Idaho, and Oregon.120Michele Gilman, “AI Algorithms Intended to Root Out Welfare Fraud Often End Up Punishing the Poor Instead,” (<)em(>)The Conversation(<)/em(>), February 14, 2020, (<)a href='https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625'(>)https://theconversation.com/ai-algorithms-intended-to-root-out-welfare-fraud-often-end-up-punishing-the-poor-instead-131625(<)/a(>).
- In 2013, the Dutch government implemented an algorithmic system to identify potentially fraudulent claims for child care benefits. The system used an applicant’s nationality as one of its risk factors, and non-Dutch nationals were often assigned higher risk scores than Dutch nationals. Years of reliance on this algorithm resulted in false determinations of fraud against tens of thousands of people, many of them low-income, who had benefits rescinded or faced harsh penalties.121Amnesty International, “Dutch Childcare Benefit Scandal an Urgent Wake-Up Call to Ban Racist Algorithms,” October 25, 2021, (<)a href='https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/'(>)https://www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal(<)/a(>).
- Since 2016, Allegheny County in Pennsylvania has relied on a predictive risk model to supplement its human intake screeners in identifying possible instances of child abuse and neglect. The model computes risk scores using indicators like call frequency and data obtained from public agencies. Consequently, children in families that are more likely to be surveilled by mandated reporters or community members—intake workers receive calls about Black or biracial families three and a half times more frequently than about white families—or that simply rely on public programs are likely to receive inaccurately high scores.122Virginia Eubanks, “A Child Abuse Prediction Model Fails Poor Families,” (<)em(>)Wired(<)/em(>), January 15, 2018, (<)a href='https://www.wired.com/story/excerpt-from-automating-inequality/'(>)https://www.wired.com/story/excerpt-from-automating-inequality(<)/a(>).
We know about the examples above only because individuals on the receiving end of an AI system were able to determine that they were wronged, and used the legal system—in combination with public pressure—to push for accountability. More often, we don’t know how AI is being used at all.
AI is often deployed privately and surreptitiously, such as through the use of so-called “social scoring” systems that draw together disparate data sources to make determinations about access to resources. Those impacted by these systems are left without the information necessary to know and understand their effects. For example, an investigative report by The Markup and The New York Times into hundreds of federal lawsuits filed against companies deploying tenant-screening algorithms found that screening reports often contain glaring errors, resulting in qualified individuals being denied housing without ever knowing an incorrect background report was responsible—let alone having the opportunity to correct it.123Lauren Kirchner and Matthew Goldstein, “Access Denied: Faulty Automated Background Checks Freeze Out Renters,” (<)em(>)Markup(<)/em(>), May 28, 2020, (<)a href='https://themarkup.org/locked-out/2020/05/28/access-denied-faulty-automated-background-checks-freeze-out-renter'(>)https://themarkup.org/locked-out/2020/05/28/access-denied-faulty-automated-background-checks-freeze-out-renter(<)/a(>).
In other cases, firms leverage the information asymmetries afforded by their dominant position in a market against consumers. The Department of Justice—joined by eight different states—filed a lawsuit in 2024 against RealPage, a private-equity-owned software company that collects confidential information from landlords about rents and occupancy rates, and then uses an algorithm to suggest inflated rental rates to landlords on its platform. The lawsuit revealed that over three million rental units are priced using this technology, which RealPage advertises to landlords as a way to increase prices above market rate.124Danielle Kaye, Lauren Hirsch, and David McCabe, “U.S. Accuses Software Maker Realpage of Enabling Collusion on Rents,” (<)em(>)New York Times(<)/em(>), August 23, 2024, (<)a href='https://www.nytimes.com/2024/08/23/business/economy/realpage-doj-antitrust-suit-rent.html'(>)https://www.nytimes.com/2024/08/23/business/economy/realpage-doj-antitrust-suit-rent.html(<)/a(>). The absence of affordable housing is an entrenched public-policy problem that exists outside of AI. But the use of AI both exacerbates the underlying problem—by making it easy for landlords to wrongly deny access to housing to potential tenants, and by keeping housing off the market in the hope of extracting higher rents—and makes the problem more obscure to the public.
DOGE’s Power Grab

Nowhere is AI solutionism weaponized more than in the Department of Government Efficiency’s wholesale attack on the administrative state under the guise of government “efficiency.”125Brian Chen, “Dispelling Myths of AI and Efficiency,” (<)em(>)Data & Society(<)/em(>), March 25, 2025, (<)a href='https://datasociety.net/library/dispelling-myths-of-ai-and-efficiency/'(>)https://datasociety.net/library/dispelling-myths-of-ai-and-efficiency(<)/a(>); Makena Kelly, “Elon Musk Ally Tells Staff ‘AI-First’ Is the Future of Key Government Agency,” (<)em(>)Wired(<)/em(>), February 3, 2025, (<)a href='https://www.wired.com/story/elon-musk-lieutenant-gsa-ai-agency/'(>)https://www.wired.com/story/elon-musk-lieutenant-gsa-ai-agency(<)/a(>). AI is a central part of DOGE’s stated efforts to advance a broader austerity agenda by “modernizing” federal technology to “maximize government efficiency.” For example, DOGE officials have claimed that “AI” can be used to identify budget cuts, detect fraud and abuse,126Kate Conger, Ryan Mac, and Madeleine Ngo, “Musk Allies Discuss Deploying A.I. to Find Budget Savings,” (<)em(>)New York Times(<)/em(>), February 3, 2025, (<)a href='https://www.nytimes.com/2025/02/03/technology/musk-allies-ai-government.html'(>)https://www.nytimes.com/2025/02/03/technology/musk-allies-ai-government.html(<)/a(>). automate government tasks,127Makena Kelly and Zoe Schiffer, “DOGE Has Deployed Its GSAi Custom Chatbot for 1,500 Federal Workers,” (<)em(>)Wired(<)/em(>), March 7, 2025, (<)a href='https://www.wired.com/story/gsai-chatbot-1500-federal-workers/'(>)https://www.wired.com/story/gsai-chatbot-1500-federal-workers(<)/a(>). and determine whether someone’s job is “mission critical”128Courtney Kube et al., “DOGE Will Use AI to Assess the Responses of Federal Workers Who Were Told to Justify Their Jobs via Email,” NBC News, February 25, 2025, (<)a href='https://www.nbcnews.com/politics/doge/federal-workers-agencies-push-back-elon-musks-email-ultimatum-rcna193439'(>)https://www.nbcnews.com/politics/doge/federal-workers-agencies-push-back-elon-musks-email-ultimatum-rcna193439(<)/a(>).—without any tangible evidence that AI is capable of doing these tasks effectively, or at all.129Asmelash Teka Hadgu and Timnit Gebru, “Replacing Federal Workers with Chatbots Would Be a Dystopian Nightmare,” (<)em(>)Scientific American(<)/em(>), April 14, 2025, (<)a href='https://www.scientificamerican.com/article/replacing-federal-workers-with-chatbots-would-be-a-dystopian-nightmare'(>)https://www.scientificamerican.com/article/replacing-federal-workers-with-chatbots-would-be-a-dystopian-nightmare(<)/a(>). DOGE’s language around AI also perpetuates the false idea that “AI” is a coherent set of technologies able to meet complex social goals.130Emily M. Bender, “Calling All Mad Scientists: Reject ‘AI’ as a Framing of Your Work,” (<)em(>)Mystery AI Hype Theater 3000: The Newsletter(<)/em(>), April 8, 2025, (<)a href='https://buttondown.com/maiht3k/archive/calling-all-mad-scientists-reject-ai-as-a-framing/'(>)https://buttondown.com/maiht3k/archive/calling-all-mad-scientists-reject-ai-as-a-framing(<)/a(>).
Whether DOGE is actually using AI, or whether AI can even do these functions at any level of acceptable technical performance, is beside the point. The invocation of AI and its many “innovations” has enabled DOGE to effectuate a wholesale data and power grab, gaining unrestricted, “god-tier” access to data across federal agencies, centralizing government contracting databases, and bulldozing government technical infrastructure to make them legible to AI systems and the tech companies that control them.
DOGE claims that it is making government more advanced and efficient, but there is incontrovertible evidence that DOGE is making the federal government decidedly less efficient and less technologically adept. DOGE has put tens of thousands of expert civil servants out of their jobs and rendered the democratic process itself “waste” to be eliminated. It has shut down the office in the Social Security Administration whose job it was to digitize signatures and promote cybersecurity enhancements,131Social Security Administration, “Social Security Eliminates Wasteful Department,” press release, February 24, 2025, (<)a href='https://www.ssa.gov/news/press/releases/2025/#2025-02-24'(>)https://www.ssa.gov/news/press/releases/2025/#2025-02-24(<)/a(>); Natalie Alms, “Social Security Shutters its Civil Rights and Transformation Offices,” (<)em(>)Government Executive(<)/em(>), February 26, 2025, (<)a href='https://www.govexec.com/management/2025/02/social-security-shutters-its-civil-rights-and-transformation-offices/403310/'(>)https://www.govexec.com/management/2025/02/social-security-shutters-its-civil-rights-and-transformation-offices/4033(<)/a(>). and a team of technologists within the General Services Administration whose job it was to maintain critical government digital services like Login.gov.132Ed O’Keefe and Rhona Tarrant, “General Services Administration Shutters its Technology Unit,” CBS, March 2, 2025, (<)a href='https://www.cbsnews.com/news/general-services-administration-shutters-technology-unit-trump-doge/'(>)https://www.cbsnews.com/news/general-services-administration-shutters-technology-unit-trump-doge(<)/a(>). DOGE has ambitions to cut the Agency for Healthcare Research and Quality (AHRQ), a research center designed to find efficiencies in healthcare quality research.133John Wilkerson, “HHS Agency Responsible for Health Care Quality Research Threatened with Mass Layoffs,” (<)em(>)Stat(<)/em(>), March 20, 2025, (<)a href='https://www.statnews.com/2025/03/20/hhs-ahrq-agency-responsible-for-health-care-quality-research-threatened-with-mass-layoffs/'(>)https://www.statnews.com/2025/03/20/hhs-ahrq-agency-responsible-for-health-care-quality-research-threatened-with-mass-layoffs(<)/a(>); Ezra Klein, interview with Santi Ruiz, The Ezra Klein Show, podcast audio, 16:00, March 25, 2025, (<)a href='https://www.nytimes.com/2025/03/25/opinion/ezra-klein-podcast-santi-ruiz.html'(>)https://www.nytimes.com/2025/03/25/opinion/ezra-klein-podcast-santi-ruiz.html(<)/a(>). DOGE has sown mass confusion and chaos, issuing unclear directives via email and costing the government tens of millions of dollars in wasted work hours.134Jonathan Allen, “‘Absolute Chaos’: DOGE Sows Turmoil in Its Quest for ‘Efficiency’,” NBC News, February 25, 2025, (<)a href='https://www.nbcnews.com/politics/doge/absolute-chaos-doge-turmoil-efficiency-rcna193579'(>)https://www.nbcnews.com/politics/doge/absolute-chaos-doge-turmoil-efficiency-rcna193579(<)/a(>). In some cases, DOGE has had to rehire workers it previously fired.135Allen, “‘Absolute Chaos’.”
But while DOGE sets its sights on cutting critical agencies with negligible budgets (for example, the AHRQ makes up 0.2 percent of the government’s healthcare spending), it has failed to turn its AI sword on private tech companies—including Elon Musk’s suite of technology companies—that have received billions of dollars in federal contracts and counting.136Eric Lipton, “Musk is Positioned to Profit Off Billions in New Government Contracts,” (<)em(>)New York Times(<)/em(>), March 23, 2025, (<)a href='https://www.nytimes.com/2025/03/23/us/politics/spacex-contracts-musk-doge-trump.html'(>)https://www.nytimes.com/2025/03/23/us/politics/spacex-contracts-musk-doge-trump.html(<)/a(>); Chris Hayes, “Elon Musk Called Out for Raking in $8 Million a Day from Taxpayers,” MSNBC, February 12, 2025, (<)a href='https://www.msnbc.com/all-in/watch/elon-musk-called-out-for-raking-in-8-million-a-day-from-taxpayers-231824965528'(>)https://www.msnbc.com/all-in/watch/elon-musk-called-out-for-raking-in-8-million-a-day-from-taxpayers-231824965528(<)/a(>); Department of Defense, “Department of Defense Awards $14.3 Million to Expand Sources of Solid Rocket Motors,” press release, January 7, 2025, (<)a href='https://www.defense.gov/News/Releases/Release/Article/4022917/department-of-defense-awards-143-million-to-expand-sources-of-solid-rocket-moto/'(>)https://www.defense.gov/News/Releases/Release/Article/4022917/department-of-defense-awards-143-million-to-expand-sources-of-solid-rocket-moto(<)/a(>); Miles Jamison, “Anduril Awarded $99M Air Force Contract for Thunderdome Project,” GovCon Wire, February 18, 2025, (<)a href='https://www.govconwire.com/2025/02/anduril-99-million-air-force-contract-thunderdome-project/'(>)https://www.govconwire.com/2025/02/anduril-99-million-air-force-contract-thunderdome-project(<)/a(>); “Anduril Takes Over Microsoft’s $22 Billion US Army Headset Program,” Reuters, February 11, 2025, (<)a href='https://www.reuters.com/technology/anduril-takes-over-microsofts-22-billion-us-army-headset-program-2025-02-11/'(>)https://www.reuters.com/technology/anduril-takes-over-microsofts-22-billion-us-army-headset-program-2025-02-11(<)/a(>); MediaJustice, “WTF: The Rise of the Broligarchy PoliEd Series-Episode 2: WTF is the Tech Broligarchy Up To?” March 6, 2025, 54 min., 42 sec., (<)a href='https://www.youtube.com/watch?v=sKhnswwYe60&t=3017s'(>)https://www.youtube.com/watch?v=sKhnswwYe60&t=3017s(<)/a(>). DOGE also claims it is reducing fraud, yet a substantial share of its cuts has targeted agencies and offices wholly unrelated to fraud, such as the cancellation of government leases for the U.S. Fish and Wildlife Service in Colorado, Montana, and North Dakota.137Nick Mordowanec, “Map Shows DOGE Office Closures in March,” (<)em(>)Newsweek(<)/em(>), March 26, 2025, (<)a href='https://www.newsweek.com/doge-elon-musk-map-budget-cuts-march-2050761'(>)https://www.newsweek.com/doge-elon-musk-map-budget-cuts-march-2050761(<)/a(>).
DOGE is riddled with these inconsistencies and hypocrisies. But debating the merits of DOGE’s efforts to improve government efficiency requires acceptance of the premise that “efficiency” is DOGE’s goal. It is not. DOGE is a power grab by process, with AI solutionism operating as a smoke screen to consolidate executive power and reshape the federal government to fit the ideological agenda of the Trump Administration and its backers—some of whom own the tech companies that stand to benefit the most from both federal adoption of AI and the turn to austerity.
The millions of Americans who rely on the federal government to access critical and lifesaving services will feel the effects of DOGE’s brazen power grab most immediately. The havoc DOGE is wreaking—whether through freezing funding, flagging individuals for fraudulent activity, or eliminating staffing necessary to effectively deliver services—will inevitably lead to services being denied, delayed, or inappropriately revoked. DOGE’s 12 percent cut of the Social Security Administration (SSA) workforce has already led to reports of website crashes, hold times of more than four hours, and delayed application timelines.138Zeeshan Aleem, “DOGE Is Already Breaking Social Security,” MSNBC, March 26, 2025, (<)a href='https://www.msnbc.com/opinion/msnbc-opinion/social-security-elon-musk-doge-cuts-issues-rcna198034'(>)https://www.msnbc.com/opinion/msnbc-opinion/social-security-elon-musk-doge-cuts-issues-rcna198034(<)/a(>). DOGE’s reported plan to migrate SSA’s computer systems in a matter of months is all but certain to lead to massive disruptions for the more than sixty-five million people receiving Social Security benefits.139Makena Kelly, “DOGE Plans to Rebuild SSA Code Base in Months, Risking Benefits and System Collapse,” (<)em(>)Wired(<)/em(>), March 28, 2025, (<)a href='https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits/'(>)https://www.wired.com/story/doge-rebuild-social-security-administration-cobol-benefits(<)/a(>).
Tens of thousands of civil servants have already lost their jobs, pushed out of agencies ranging from Veterans Affairs to Health and Human Services to the Consumer Financial Protection Bureau. Others have been told their jobs could easily be replaced by AI. But they can’t be. AI is not designed to administer the federal government, nor is it capable of doing so.140Brian J. Chen, “Dispelling Myths of AI and Efficiency,” (<)em(>)Data & Society(<)/em(>), March 25, 2025, (<)a href='https://datasociety.net/library/dispelling-myths-of-ai-and-efficiency/'(>)https://datasociety.net/library/dispelling-myths-of-ai-and-efficiency(<)/a(>). And when the AI systems that are introduced inevitably fail, DOGE will have created the perfect vacuum for private companies to swoop in and clean up the mess. Writes Eryk Salvaggio: “By shifting government decisions to AI systems they must know are unsuitable, these tech elites avoid a political debate they would probably lose. Instead, they create a nationwide IT crisis that they alone can fix.”141Eryk Salvaggio, “Anatomy of an AI Coup,” (<)em(>)Tech Policy Press(<)/em(>), February 9, 2025, (<)a href='https://www.techpolicy.press/anatomy-of-an-ai-coup/'(>)https://www.techpolicy.press/anatomy-of-an-ai-coup(<)/a(>). Workday CEO Carl Eschenbach has already called DOGE a “tremendous opportunity” to integrate his company’s portfolio of cloud and AI products (such as autonomous agents) into the government.142Bob Evans, “DOGE Triggering ‘Tremendous Opportunity’ for Workday Federal Business,” Cloud Wars, March 13, 2025, (<)a href='https://cloudwars.com/cloud/doge-triggering-tremendous-opportunity-for-workday-federal-business/'(>)https://cloudwars.com/cloud/doge-triggering-tremendous-opportunity-for-workday-federal-business(<)/a(>). Threatening to replace government administration with AI systems may demoralize federal workers and imperil millions of Americans’ livelihoods, but it’s fantastic for business.
In order to centralize government systems, DOGE has been given unrestricted access to people’s sensitive, private, and personally identifying data. This includes people’s entire credit histories, Social Security benefit payments, and tax data.143Center for Democracy and Technology et al., (<)em(>)DOGE and Government Data Privacy(<)/em(>), March 17, 2025, (<)a href='https://civilrights.org/wp-content/uploads/2025/03/DOGE-and-Government-Data-Privacy-FINAL.pdf'(>)https://civilrights.org/wp-content/uploads/2025/03/DOGE-and-Government-Data-Privacy-FINAL.pdf(<)/a(>); Aleem, “DOGE is Already Breaking Social Security”; Chas Danner, “All the Federal Agencies DOGE Has Gotten Access to,” (<)em(>)New York Magazine(<)/em(>), February 10, 2025, (<)a href='https://nymag.com/intelligencer/article/doge-elon-musk-what-federal-agencies-access-lawsuits.html'(>)https://nymag.com/intelligencer/article/doge-elon-musk-what-federal-agencies-access-lawsuits.html(<)/a(>). DOGE’s staff has also been given access to private market data—such as information contained in confidential investigations and actions taken by enforcement agencies—meaning that data highly desirable to private companies is now easily accessible to a team of staffers run by Musk, a private, unelected citizen whose own suite of companies directly competes with the companies targeted in these actions and investigations.
Some of the worst consequences of DOGE—especially for everyday people—will not become obvious until it is too late. Decimating public research infrastructure, for example, will undermine researchers and institutions that otherwise would be working on potentially life-changing technological breakthroughs. Dismantling the regulatory state, including agencies like the Consumer Financial Protection Bureau tasked with protecting ordinary Americans from predation by private companies, means fewer people will receive financial redress for corporate wrongdoing. DOGE has already tapped a former Tesla engineer to become the Chief Information Officer at the Department of Labor, the principal federal agency responsible for enforcing federal labor law, putting critical protections and material benefits like pensions at risk.144Brian Merchant, “DOGE’s ‘AI-First’ Strategist Is Now the Head of Technology at the Department of Labor,” (<)em(>)Blood in the Machine(<)/em(>) (blog), March 19, 2025, (<)a href='https://www.bloodinthemachine.com/p/doges-ai-first-strategist-is-now'(>)https://www.bloodinthemachine.com/p/doges-ai-first-strategist-is-now(<)/a(>); Natalie Alms, “TTS Director Tapped to Serve as Labor CIO,” Nextgov/FCW, March 18, 2025, (<)a href='https://www.nextgov.com/people/2025/03/tts-director-tapped-serve-labor-cio/403855'(>)https://www.nextgov.com/people/2025/03/tts-director-tapped-serve-labor-cio/403855(<)/a(>).
MyCity: A Case Study in AI for Austerity

In New York City, Mayor Eric Adams campaigned on promises to make city government more efficient, pledging to appoint an “efficiency czar” and develop a centralized technology portal called MyCity that would serve as a one-stop resource for New York City residents to access city services, apply for benefits, and find useful city information.145Samar Khurshid, “Eric Adams Vows to Overhaul How City Government Works; Experts Point to Several Essentials to Following Through,” (<)em(>)Gotham Gazette(<)/em(>), October 31, 2021, (<)a href='https://www.gothamgazette.com/city/10870-eric-adams-promises-overhaul-how-city-government-works-experts'(>)https://www.gothamgazette.com/city/10870-eric-adams-promises-overhaul-how-city-government-works-experts(<)/a(>). But the realities of MyCity have been far more sobering. In their report MyCity, INC: A Case Against ‘CompStat Urbanism,’ Cynthia Conti-Cook and Ed Vogel at Surveillance Resistance Lab articulate how MyCity embeds corporate technology into public infrastructure in ways that undermine democratic governance and deploy austerity measures.146Cynthia Conti-Cook and Ed Vogel, (<)em(>)MyCity, INC: A Case Against “CompStat Urbanism”(<)/em(>) (New York: Surveillance Resistance Lab, March 18, 2024).
Since 2023, New York City has partnered with Microsoft’s Azure AI to build and launch a “MyCity Chatbot” that has provided false—and at times unlawful—advice to people.147City of New York, “Mayor Adams Releases First-of-Its-Kind Plan for Responsible Artificial Intelligence Use in NYC Government,” October 16, 2023, (<)a href='https://www.nyc.gov/office-of-the-mayor/news/777-23/mayor-adams-releases-first-of-its-kind-plan-responsible-artificial-intelligence-use-nyc'(>)https://www.nyc.gov/office-of-the-mayor/news/777-23/mayor-adams-releases-first-of-its-kind-plan-responsible-artificial-intelligence-use-nyc(<)/a(>); Colin Lecher, “NYC’s AI Chatbot Tells Businesses to Break the Law,” (<)em(>)Markup(<)/em(>), March 29, 2024, (<)a href='https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law'(>)https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law(<)/a(>). For example, the chatbot told one user that it was legal to fire an employee for filing a sexual harassment complaint.148Jake Offenhartz, “NYC’s AI Chatbot Was Caught Telling Businesses to Break the Law. The City Isn’t Taking it Down,” Associated Press, April 3, 2024, (<)a href='https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21'(>)https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21(<)/a(>). (It is not.149New York State Human Rights Law, Section 296, Unlawful Discriminatory Practices.) As of writing, the chatbot still warns users that it “may occasionally provide incomplete or inaccurate responses.” A chatbot that requires users to fact-check its answers doesn’t benefit anyone in the name of efficiency—except Microsoft, the company on the receiving end of the government contract.
Crucially, when the city attempted to understand the data the chatbot was trained on, Microsoft claimed that the training data was “proprietary to the vendor,” evading any accountability and leaving government oversight committees in the dark.150New York City Office of Technology and Innovation, “Summary of Agency Compliance Reporting of Algorithmic Tools,” 2023, (<)a href='https://www.nyc.gov/assets/oti/downloads/pdf/reports/2023-algorithmic-tools-reporting-updated.pdf'(>)https://www.nyc.gov/assets/oti/downloads/pdf/reports/2023-algorithmic-tools-reporting-updated.pdf(<)/a(>). Meanwhile, MyCity was accompanied by a state-wide legislative effort to allow agencies to cross-share data that would otherwise be restricted for the purposes of providing government benefits or services, raising broad privacy concerns.151One City Act, Assembly Bill A9642 (2023–2024), (<)a href='https://www.nysenate.gov/legislation/bills/2023/A9642'(>)https://www.nysenate.gov/legislation/bills/2023/A9642(<)/a(>).
As of 2025, the MyCity project has spent over $100 million on private contracts with technology vendors—most of them located outside New York—with few benefits to show New Yorkers.152Zachary Groz, “How Eric Adams’s MyCity Portal Became a $100 Million Question Mark,” (<)em(>)New York Focus(<)/em(>), March 19, 2025, (<)a href='https://nysfocus.com/2025/03/19/mycity-eric-adams-child-care'(>)https://nysfocus.com/2025/03/19/mycity-eric-adams-child-care(<)/a(>). Still, the city’s Chief Technology Officer hopes to expand MyCity and the faulty AI chatbot.153Annie McDonough, “Matt Fraser Still Wants to Expand MyCity and AI Chatbot,” (<)em(>)City & State New York(<)/em(>), March 19, 2025, (<)a href='https://www.cityandstateny.com/personality/2025/03/matt-fraser-still-wants-expand-mycity-and-ai-chatbot/403899/'(>)https://www.cityandstateny.com/personality/2025/03/matt-fraser-still-wants-expand-mycity-and-ai-chatbot/403899(<)/a(>). Such a decision makes no sense, unless you consider that the goal of projects like MyCity may not, in fact, be to serve citizens but instead to centralize access to citizen data; privatize and outsource government work; and entrench corporate power without meaningful accountability mechanisms—another textbook example of AI serving as “double speak” for austerity.154Likhita Banerji and Damini Satija, “AI as Double Speak for Austerity,” (<)em(>)Tech Policy Press(<)/em(>), February 7, 2025, (<)a href='https://www.techpolicy.press/ai-as-double-speak-for-austerity/'(>)https://www.techpolicy.press/ai-as-double-speak-for-austerity(<)/a(>).

4. The Productivity Myth Obscures a Foundational Truth: The Benefits of AI Accrue to Companies, Not to Workers or the Public at Large

CEOs of AI companies have made assertions that the technology will lead to untold and transformative productivity growth. As OpenAI CEO Sam Altman said in an interview in June 2023: “I think the world will get way wealthier, we’ll have a productivity boom, and we will find a lot of new things to do. […] We can cure all diseases, we can get everybody a great education, better health care, massively increase productivity, huge scientific discovery, all of these wonderful things and we want to make sure that people get that benefit, and that benefit is distributed equitably.”155Satyan Gajwani, “AI Effect: We’ll Get Way Wealthier and Witness a Productivity Boom, Says Sam Altman,” (<)em(>)Economic Times(<)/em(>), June 9, 2023, (<)a href='https://economictimes.indiatimes.com/tech/technology/ai-effect-well-get-way-wealthier-and-witness-a-productivity-boom-says-sam-altman/articleshow/100857881.cms'(>)https://economictimes.indiatimes.com/tech/technology/ai-effect-well-get-way-wealthier-and-witness-a-productivity-boom-says-sam-altman/articleshow/100857881.cms(<)/a(>).
But the reality is far from equitable. For Altman, as for other leaders in AI and Big Tech, “productivity” is a euphemism for the mutually beneficial economic relationship between firms and their shareholders—not between firms and their workers. Not only are workers failing to benefit from AI’s productivity gains; for many, conditions are only getting worse.
Instead of extending the gains to workers, AI is devaluing their labor and making their work more monotonous and more heavily surveilled. These promises aren’t new: The idea that AI will enhance worker productivity has driven the deployment of algorithmic management techniques and “worker productivity” tools156Jodi Kantor and Arya Sundaram, “The Rise of the Worker Productivity Score,” (<)em(>)New York Times(<)/em(>), August 14, 2022, (<)a href='https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html'(>)https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html(<)/a(>). across many sectors; these tools rely on heavy surveillance of workers157Wilneida Negrón, (<)em(>)Little Tech Is Coming for Workers(<)/em(>), Coworker.org, 2021, (<)a href='https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf'(>)https://home.coworker.org/wp-content/uploads/2021/11/Little-Tech-Is-Coming-for-Workers.pdf(<)/a(>).—down to the micromovements of some workers’ muscles—to set rates and shape working conditions.158Edward Ongweso Jr., “Amazon’s New Algorithm Will Set Workers’ Schedules According to Muscle Use,” (<)em(>)Vice(<)/em(>), April 15, 2021, (<)a href='https://www.vice.com/en/article/amazons-new-algorithm-will-set-workers-schedules-according-to-muscle-use'(>)https://www.vice.com/en/article/amazons-new-algorithm-will-set-workers-schedules-according-to-muscle-use(<)/a(>). For example, Amazon tracks warehouse workers down to the minute using AI and can fire workers if they accumulate thirty minutes of “time off task” on three separate days within a year.159Lauren Kaori Gurley, “Internal Documents Show Amazon’s Dystopian System for Tracking Workers Every Minute of Their Shifts,” (<)em(>)Vice(<)/em(>), June 2, 2022, (<)a href='https://www.vice.com/en/article/internal-documents-show-amazons-dystopian-system-for-tracking-workers-every-minute-of-their-shifts/'(>)https://www.vice.com/en/article/internal-documents-show-amazons-dystopian-system-for-tracking-workers-every-minute-of-their-shifts(<)/a(>). Time off task includes using the restroom, helping a coworker move a heavy package, or taking a break to cool off or warm up, even when warehouse temperatures are extreme.160Warehouse Worker Resource Center, “Extreme Heat at Amazon Air,” September 2022, (<)a href='https://warehouseworkers.org/wwrc_resources/extreme-heat-at-amazon-air'(>)https://warehouseworkers.org/wwrc_resources/extreme-heat-at-amazon-air(<)/a(>). Rather than address worker demands for improved conditions, Amazon announced plans to introduce “wellness chambers” into warehouses where workers could watch videos on relaxation.161“Amazon Offers ‘Wellness Chamber’ for Stressed Staff,” BBC, May 28, 2021, (<)a href='https://www.bbc.com/news/technology-57287151'(>)https://www.bbc.com/news/technology-57287151(<)/a(>).
This relentless tracking has led to increased rates of workplace injuries. In late 2024, a Senate investigation found that Amazon’s algorithmically driven productivity quotas lead to injury rates significantly higher than both the industry average and those in non-Amazon warehouses.162U.S. Congress, Senate, Committee on Health, Education, Labor, and Pensions, “The ‘Injury-Productivity Trade-off’: How Amazon’s Obsession with Speed Creates Uniquely Dangerous Warehouses,” 118th Cong., 2d. sess., 2024. (<)a href='https://www.help.senate.gov/imo/media/doc/amazon_investigation.pdf'(>)https://www.help.senate.gov/imo/media/doc/amazon_investigation.pdf(<)/a(>). In fact, the report found that Amazon workers were almost twice as likely to be injured as workers in other warehouses, and that Amazon manipulated its workplace safety data to make its facilities appear safer than they were.163Juliana Kim, “Amazon Manipulated Injury Data to Make Warehouses Appear Safer, a Senate Probe Finds,” NPR, December 16, 2024, (<)a href='https://www.npr.org/2024/12/16/nx-s1-5230240/amazon-injury-warehouse-senate-investigation'(>)https://www.npr.org/2024/12/16/nx-s1-5230240/amazon-injury-warehouse-senate-investigation(<)/a(>).
Furthermore, the algorithmically determined rates at which workers are expected to perform never slow down; instead, work becomes “gamified,” with the next reward always just out of reach—and always to the benefit of the company. Amazon warehouse workers describe a cycle in which the rate set for them becomes unsustainable and they are fired, only to be rehired at the bottom rung, with their wages reset accordingly. Amazon’s treatment of its workers reflects the future embraced by AI-driven technology companies, wherein automation technologies are integrated across the entire labor supply chain with little regard for the human costs.
This is by design, so that companies can commodify labor into a product that can be automated and sold for profit, treating workers’ craft as disposable. This is why, regardless of the actual efficacy of AI technology, fears about displacement by AI are justified: Companies use the logic of AI’s “productivity gains” to justify the fissuring, automation, and, in some cases, the elimination of work. For example, in late 2024, fintech company Klarna boasted that it used AI to drive company cost-savings by cutting its sales and marketing teams, shifting to AI-powered engineering, and replacing its customer service teams with an OpenAI customer service chatbot that could “do the work of 700 humans.”164Theo Wayt, “Klarna’s AI Savings” (<)em(>)The Information(<)/em(>), November 25, 2024, (<)a href='https://www.theinformation.com/articles/klarnas-ai-savings'(>)https://www.theinformation.com/articles/klarnas-ai-savings(<)/a(>). By 2025, the company had reduced its employee headcount by 38 percent. Klarna’s turn to AI is presumably driven by its plans to IPO and the hope that the cost savings will attract potential shareholders and drive a higher opening price.
The logic that corporate productivity will inherently lead to shared prosperity is deeply flawed.165Daron Acemoglu and Simon Johnson, (<)em(>)Power and Progress: Our Thousand-Year Struggle Over Technology & Prosperity(<)/em(>) (PublicAffairs, 2023). In past eras when automation led to productivity gains and higher wages, it was not because of the technology’s inherent capabilities, but because corporate and regulatory policies were designed in tandem to support workers and curb corporate power.166Acemoglu and Johnson, (<)em(>)Power and Progress(<)/em(>), 9–38. The boom in machine-tool automation around World War II is instructive: Despite fears of job loss, federal policies and a strengthened labor movement protected workers’ interests and demanded higher wages for workers operating new machinery.167John F. Kennedy Presidential Library and Museum, News Conference 24, February 14, 1962, (<)a href='https://www.jfklibrary.org/archives/other-resources/john-f-kennedy-press-conferences/news-conference-24'(>)https://www.jfklibrary.org/archives/other-resources/john-f-kennedy-press-conferences/news-conference-24(<)/a(>); “Kennedy Began War on Poverty; Area Redevelopment First of His Weapons 3 Years Ago,” (<)em(>)New York Times(<)/em(>), January 10, 1964, (<)a href='https://www.nytimes.com/1964/01/10/archives/kennedy-began-war-on-poverty-area-redevelopment-first-of-his.html'(>)https://www.nytimes.com/1964/01/10/archives/kennedy-began-war-on-poverty-area-redevelopment-first-of-his.html(<)/a(>). Corporations in turn instituted policies to retain workers—like redistributing profits and providing training—to reduce turmoil and avert strikes. As a result, despite growing automation during this period, workers’ share of national income remained steady, average wages grew, and demand for workers increased.168Acemoglu and Johnson, (<)em(>)Power and Progress(<)/em(>), 242, 247. It is important to note that women, workers of color, and immigrants did not see similar gains due to insidious forces of patriarchy, racism, and xenophobia. See Acemoglu and Johnson, (<)em(>)Power and Progress(<)/em(>), 251. These gains were rolled back under Reagan-era policies that prioritized shareholder interests, used trade threats to depress labor and regulatory standards,169Amba Kak and Sarah Myers West, “International ‘Digital Trade’ Agreements: The Next Frontier,” in (<)em(>)2023 Landscape: Confronting Tech Power(<)/em(>), AI Now Institute, (<)a href='https://ainowinstitute.org/publications/international-digital-trade-agreements'(>)https://ainowinstitute.org/publications/international-digital-trade-agreements(<)/a(>). and weakened pro-worker and union policies, all of which enabled tech firms to amass market dominance and control over key resources.170Susannah Glickman, “AI and Tech Industrial Policy: From Post-Cold War Post-Industrialism to Post-Neoliberal Re-Industrialization,”in (<)em(>)AI Nationalism(s): Global Industrial Policy Approaches to AI(<)/em(>), AI Now Institute, March 12, 2024, (<)a href='https://ainowinstitute.org/publications/ai-and-tech-industrial-policy-from-post-cold-war-post-industrialism-to-post-neoliberal-re-industrialization'(>)https://ainowinstitute.org/publications/ai-and-tech-industrial-policy-from-post-cold-war-post-industrialism-to-post-neoliberal-re-industrialization(<)/a(>). The AI industry is a decisive product of this history.
The introduction of algorithmic wage discrimination reflects both the fissuring of work and surveillance capitalism: Firms pay individual workers different wages for the same work based on algorithmic processing of numerous data points, including demand, location, or worker behavior.171Veena Dubal, “On Algorithmic Wage Discrimination,” (<)em(>)Columbia Law Review(<)/em(>) 124, no. 7 (2023), (<)a href='https://doi.org/10.2139/ssrn.4331080'(>)https://doi.org/10.2139/ssrn.4331080(<)/a(>). The lack of transparency into how wages are calculated leads to wage precarity, as workers cannot reliably predict how much they will make on a given day for the same work. Algorithmic wage setting only goes in one direction for workers: down. Rideshare companies, for example, can coerce drivers into accepting low wages by threatening to send them to the back of lengthy queues or to deactivate their accounts.
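To make concrete what paying different wages for the same work can look like, here is a deliberately simplified, hypothetical sketch. It does not describe any rideshare or staffing platform’s actual system; the function names, factors, and multipliers are invented solely to illustrate the general pattern Dubal documents, in which opaque, personalized signals push offers downward for workers the system predicts will accept less.

```python
# Toy illustration of algorithmic wage discrimination (hypothetical; not any
# company's real pricing logic). Two workers do the identical task, but an
# opaque function tunes each offer to what the worker is predicted to accept.

BASE_PAY = 20.00  # hypothetical flat rate for one task

def personalized_offer(base_pay, acceptance_rate, hours_idle, local_demand):
    """Return a pay offer adjusted by personal and market signals."""
    offer = base_pay
    if acceptance_rate > 0.9:   # rarely declines work: offer less
        offer *= 0.85
    if hours_idle > 2:          # has been waiting for a job: likely to take less
        offer *= 0.90
    if local_demand > 1.5:      # high demand: raise pay only as much as needed
        offer *= 1.05
    return round(offer, 2)

# Identical task, different pay, based only on inferred willingness to accept.
print(personalized_offer(BASE_PAY, acceptance_rate=0.95, hours_idle=3, local_demand=1.0))  # 15.3
print(personalized_offer(BASE_PAY, acceptance_rate=0.60, hours_idle=0, local_demand=1.0))  # 20.0
```

The point of the sketch is the information asymmetry: only the platform can see why the two offers differ, which is precisely what makes take-home pay unpredictable for the workers on the other side of the screen.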
It’s not only rideshare drivers who are affected: A recent Roosevelt Institute study by Katie J. Wells and Funda Ustek Spilda on on-demand nursing services showed that nurses on these platforms are forced to bid against one another for shifts, creating a “race to the bottom” for wages.172Katie J. Wells and Funda Ustek Spilda, (<)em(>)Uber for Nursing: How an AI-Powered Gig Model Is Threatening Health Care(<)/em(>), Roosevelt Institute, December 2024, (<)a href='https://rooseveltinstitute.org/wp-content/uploads/2024/12/RI_Uber-for-Nursing_Brief_202412.pdf'(>)https://rooseveltinstitute.org/wp-content/uploads/2024/12/RI_Uber-for-Nursing_Brief_202412.pdf(<)/a(>). Furthermore, of the eleven largest platform economy companies examined in Fairwork’s report on algorithmic management, only one ensured that workers on its platform earn at least the federal minimum wage.173Katie J. Wells et al., (<)em(>)Fairwork US Ratings 2025: When AI Eats the Manager(<)/em(>), Fairwork, 2025, 6.
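To make the mechanism concrete, the sketch below illustrates, in schematic Python, how a personalized pay offer could be assembled from the kinds of signals researchers describe: local demand, time idle, and a worker’s history of accepting offers. It is a hypothetical illustration of the logic Dubal calls algorithmic wage discrimination, not any platform’s actual formula; every signal and weight here is invented for the example.

```python
# Hypothetical sketch of personalized, opaque wage setting.
# No weight or signal here reflects any real platform's system; the point is
# only to show how identical work can be priced differently per worker.
from dataclasses import dataclass

@dataclass
class WorkerProfile:
    recent_acceptance_rate: float  # share of recent offers the worker accepted
    minutes_since_last_job: int    # how long the worker has been idle
    works_late_nights: bool        # behavioral pattern inferred from history

def personalized_offer(base_fare: float, local_demand: float,
                       worker: WorkerProfile) -> float:
    """Return a per-worker pay offer for the same job."""
    offer = base_fare * min(local_demand, 1.5)  # surge passed through, but capped
    if worker.recent_acceptance_rate > 0.9:
        offer *= 0.93   # workers who rarely decline can be offered less
    if worker.minutes_since_last_job > 45:
        offer *= 0.95   # idle workers are predicted to accept lower pay
    if worker.works_late_nights:
        offer *= 0.97   # inferred willingness to work is priced in
    return round(offer, 2)

# Two workers, same job, same time and place, different pay:
eager = WorkerProfile(0.95, 60, True)
choosy = WorkerProfile(0.60, 5, False)
print(personalized_offer(18.00, 1.4, eager))   # lower offer
print(personalized_offer(18.00, 1.4, choosy))  # higher offer
```

The asymmetry is the point: the worker sees only the final number, while the platform sees, and can continually tune, every intermediate adjustment.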
Beyond wage exploitation, these platform economy companies are undermining corporate accountability at the expense of worker stability and well-being.174Wells et al., (<)em(>)When AI Eats the Manager.(<)/em(>) Fairwork maps these tactics, which include replacing management roles traditionally held by humans with chatbots; placing workers in sensitive settings, such as nursing homes, without any on-site supervision; failing to provide any phone number or contact information workers can use in an emergency; and arbitrarily deactivating workers, which cuts them off not only from their livelihood but from all information about their clients, workplaces, and work history, without any meaningful avenue of redress. While many platforms have contracts stipulating that workers can be terminated for any reason, larger platform economy companies have taken this one step further: Instacart, an on-demand grocery delivery service, has joined a lawsuit against a Seattle law that requires companies to give workers fourteen days’ notice of deactivation based on reasonable policies, undermining labor protections for their own workers and other platform economy workers in the process.175Wells et al., (<)em(>)When AI Eats the Manager, (<)/em(>)18. Contracts also limit workers’ ability to bring legal claims against the platforms, with widespread use of liability clauses, class action waivers, and arbitration clauses.
These moves are a feature, not a bug, of platform-economy business models. As Edward Ongweso Jr. writes: “For years, gig companies have pushed for mandatory arbitration because it is incredibly good at stymying class action lawsuits and legal precedents—again, if your business model relies on skirting the law, regulatory arbitrage, and aggressive lobbying, you need to stop angry workers in deplorable conditions from collectively demanding the right to earn a livable wage or be safe in the course of their work or get health insurance or other indications of dignified work.”176Edward Ongweso Jr., “Uber’s Bastards II,” (<)em(>)The Tech Bubble (<)/em(>)(blog), March 27, 2025, (<)a href='https://thetechbubble.substack.com/p/ubers-bastards-ii'(>)https://thetechbubble.substack.com/p/ubers-bastards-ii(<)/a(>).
AI Agents

The promise that AI Agents will automate rote tasks has become a focal point for product development: In a recent blog post, OpenAI founder Sam Altman said he believes that in 2025 the first AI agents will “join the workforce,” working autonomously in ways that replace human activities.177Sam Altman, “Reflections,” January 5, 2025, (<)a href='https://blog.samaltman.com/reflections'(>)https://blog.samaltman.com/reflections(<)/a(>). OpenAI has also released Operator, an agent that it claims can be tasked to fill out forms, order groceries, and perform other browser-based tasks178OpenAI, “Introducing Operator,” January 23, 2025, (<)a href='https://openai.com/index/introducing-operator/'(>)https://openai.com/index/introducing-operator(<)/a(>). (though it has been largely unsuccessful so far),179Casey Newton, “OpenAI Launches its Agent,” (<)em(>)Platformer(<)/em(>), January 23, 2025, (<)a href='https://www.platformer.news/openai-operator-ai-agent-hands-on/'(>)https://www.platformer.news/openai-operator-ai-agent-hands-on(<)/a(>). and Deep Research, which conducts literature reviews using web-based information;180Robert Krzaczyński, “OpenAI Launches Deep Research: Advancing AI-Assisted Investigation,” InfoQ, February 6, 2025, (<)a href='https://www.infoq.com/news/2025/02/deep-research-openai/'(>)https://www.infoq.com/news/2025/02/deep-research-openai(<)/a(>). Google has also released a tool called Deep Research that performs similar tasks,181Dave Citron, “Try Deep Research and Our New Experimental Model in Gemini, Your AI Assistant,” Google, December 11, 2024, (<)a href='https://blog.google/products/gemini/google-gemini-deep-research/'(>)https://blog.google/products/gemini/google-gemini-deep-research(<)/a(>). and Palantir is piloting a studio that enables its customers to build their own agents on top of its Ontology platform.182Palantir, “AIP Agent Studio [Beta],” accessed April 25, 2025, (<)a href='https://www.palantir.com/docs/foundry/agent-studio/overview'(>)https://www.palantir.com/docs/foundry/agent-studio/overview(<)/a(>).
But “agentic” AI introduces compounding levels of risk. These systems retain all of the flaws of LLMs, yet they are positioned to act more autonomously, with greater complexity, and in ways that make human oversight even more superficial.183Margaret Mitchell et al., “Why Handing Over Total Control to AI Agents Would Be a Huge Mistake,” (<)em(>)Technology Review(<)/em(>), March 24, 2025, (<)a href='https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake'(>)https://www.technologyreview.com/2025/03/24/1113647/why-handing-over-total-control-to-ai-agents-would-be-a-huge-mistake(<)/a(>). As Signal President Meredith Whittaker has outlined, AI agents pose particular risks to user privacy because acting autonomously requires giving them root-level access to private information: “There’s a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data,” she said.184Sarah Perez, “Signal President Meredith Whittaker Calls Out Agentic AI as Having ‘Profound’ Security and Privacy Issues,” (<)em(>)TechCrunch(<)/em(>), March 7, 2025, (<)a href='https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues/'(>)https://techcrunch.com/2025/03/07/signal-president-meredith-whittaker-calls-out-agentic-ai-as-having-profound-security-and-privacy-issues(<)/a(>).
In order for these systems to “work,” organizations will have to change in fundamental ways: For one, AI agents require highly structured pools of data to work with.185Ben Thompson, “Enterprise Philosophy and the First Wave of AI,” (<)em(>)Stratechery(<)/em(>), September 24, 2024, (<)a href='https://stratechery.com/2024/enterprise-philosophy-and-the-first-wave-of-ai/'(>)https://stratechery.com/2024/enterprise-philosophy-and-the-first-wave-of-ai(<)/a(>). This means that companies using agents will have to “datify” their practices, taking their activities and rendering them into discrete data objects on which AI systems can be trained. To do this, organizations will need to become more bureaucratic and more surveillant—the opposite of the promises of greater autonomy typically associated with the tech industry.
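As a rough illustration of what “datifying” a practice entails, the sketch below renders an informal workplace activity as the kind of discrete, structured record an agent pipeline would need; the schema, field names, and example values are all invented for illustration rather than drawn from any real deployment.

```python
# Hypothetical example of rendering a workplace activity as a structured data
# object that an AI agent could be trained on or act against. The schema is
# invented for illustration only.
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class ActivityRecord:
    employee_id: str          # every actor must be individually identified
    activity_type: str        # free-form work collapsed into fixed categories
    started_at: str           # ISO timestamps make the work machine-legible
    ended_at: str
    outcome: str              # a result field a system can score or predict
    approvals_required: list  # workflow steps made explicit and auditable

# What used to be "Maria handled a refund over the phone" becomes:
record = ActivityRecord(
    employee_id="E-1042",
    activity_type="customer_refund",
    started_at=datetime(2025, 3, 3, 14, 5).isoformat(),
    ended_at=datetime(2025, 3, 3, 14, 17).isoformat(),
    outcome="refund_issued",
    approvals_required=["supervisor_signoff"],
)
print(json.dumps(asdict(record), indent=2))
```

Each field the agent needs is also a new point of monitoring: the structure that makes the work legible to the system doubles as a fine-grained record of the worker’s day.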

5. AI Use Is Frequently Coercive, Violating Rights and Undermining Due Process

While the entry point to AI for many people is a system like ChatGPT, more often we encounter AI technologies used not by us but on us, shaping our access to resources in realms from finance to hiring to housing. Because these AI systems are typically wielded by powerful institutional actors, they benefit from information asymmetries. And more than other types of technologies, the deployment of AI in these contexts is often outright coercive, offering little transparency to those subject to it and no meaningful ability to opt out.
The rapid integration of AI systems into critical social infrastructures thus raises urgent concerns about the violation of numerous rights and laws critical to realizing justice, freedom, and dignity, individually and collectively, including due process, privacy, and civil rights. This is nowhere clearer than in the rise of AI use in immigration enforcement, where human rights abuses are common and legal norms are routinely violated, even before AI is in the mix.
Immigration enforcement requires a significant amount of data collection and processing: The files from a single immigration application can run to thousands of pages and include complex legal rulings and other documentation. As a result, immigration agencies housed under the Department of Homeland Security have rapidly integrated AI technologies across many aspects of immigration enforcement, but with little to no meaningful oversight. And while DHS has published a set of principles regarding its responsible use of AI, immigration advocates have found that the agency routinely skirts those obligations.186Julie Mao et al., (<)em(>)Automating Deportation: The Artificial Intelligence Behind the Department of Homeland Security’s Immigration Enforcement Regime(<)/em(>), Just Futures Law and Mijente, June 2024, (<)a href='https://mijente.net/wp-content/uploads/2024/06/Automating-Deportation.pdf'(>)https://mijente.net/wp-content/uploads/2024/06/Automating-Deportation.pdf(<)/a(>).
For example, the United States Citizenship and Immigration Services (USCIS) uses predictive tools to automate the agency’s decision-making, like “Asylum Text Analytics,” which queries asylum and withholding applications to flag those it deems fraudulent. These tools have demonstrated, among other flaws, high rates of misclassification when applied to text written by non-native English speakers.187Weixin Liang et al., “GPT Detectors Are Biased Against Non-Native English Writers,” (<)em(>)arXiv, (<)/em(>)July 10, 2023, (<)a href='https://arxiv.org/abs/2304.02819'(>)arXiv:2304.02819(<)/a(>). And the consequences of an erroneous fraud determination are significant: It can lead to deportation, lifelong bans from the United States, and imprisonment for up to ten years. Still, there is little to no transparency for those on whom these systems are used, no ability to opt out or seek remediation when the systems make erroneous decisions, and, just as importantly, little evidence that the tools’ effectiveness has been, or can be, improved.
Immigration and Customs Enforcement (ICE) also uses predictive analytics and risk assessment to make determinations about detention and release: For example, the agency uses a “hurricane score” to decide the terms of electronic surveillance based on predictions of how likely an individual is to “abscond” and the level of “threat to the community” they pose. A “risk calibration score” used to shape detention decisions was set at multiple junctures by ICE so that it would not recommend release for anyone it was used on, leading to a 2020 lawsuit accusing the agency of violating due process rights under the Fifth Amendment.188Adi Robertson, “ICE Rigged Its Algorithms to Keep Immigrants in Jail, Claims Lawsuit,” (<)em(>)Verge,(<)/em(>) March 3, 2020, (<)a href='https://www.theverge.com/2020/3/3/21163013/ice-new-york-risk-assessment-algorithm-rigged-lawsuit-nyclu-josevelesaca'(>)https://www.theverge.com/2020/3/3/21163013/ice-new-york-risk-assessment-algorithm-rigged-lawsuit-nyclu-josevelesaca(<)/a(>).
A database Palantir produced for ICE, Investigative Case Management (ICM), is used in making these determinations, including deciding which people to target for arrest during immigration raids.189Mijente, (<)em(>)The War Against Immigrants(<)/em(>), August 2019, 4, (<)a href='https://mijente.net/wp-content/uploads/2019/08/Mijente-The-War-Against-Immigrants_-Trumps-Tech-Tools-Powered-by-Palantir_.pdf'(>)https://mijente.net/wp-content/uploads/2019/08/Mijente-The-War-Against-Immigrants_-Trumps-Tech-Tools-Powered-by-Palantir_.pdf(<)/a(>); National Immigration Project et al., (<)em(>)Who’s Behind ICE? The Tech and Data Companies Fueling Deportations(<)/em(>), October 2018, 3, (<)a href='https://surveillanceresistancelab.org/resources/whos-behind-ice/'(>)https://surveillanceresistancelab.org/resources/whos-behind-ice(<)/a(>). ICM is a sprawling tool for filtering and querying a database containing hundreds of categories, including “unique physical characteristics,” “criminal affiliation,” “bankruptcy filings,” and “place of employment,” which agents use to build reports on targets.190Jason Koebler, “Inside a Powerful Database ICE Uses to Identify and Deport People,” (<)em(>)404 Media(<)/em(>), April 9, 2025, (<)a href='https://www.404media.co/inside-a-powerful-database-ice-uses-to-identify-and-deport-people/'(>)https://www.404media.co/inside-a-powerful-database-ice-uses-to-identify-and-deport-people(<)/a(>). In 2022, Palantir announced a five-year contract with the Department of Homeland Security worth $95.9 million to maintain the system.191Palantir, “Homeland Security Investigations Renews Partnership with Palantir for Case Management Software,” press release, September 26, 2022, (<)a href='https://www.palantir.com/newsroom/press-releases/homeland-security-investigations-renews-partnership-with-palantir'(>)https://www.palantir.com/newsroom/press-releases/homeland-security-investigations-renews-partnership-with-palantir(<)/a(>). It connects to other government databases, including records of all people admitted on student visas, real-time maps of ICE’s location-tracking tools, and location data from license plate readers, among other sources of information.192Koebler, “Inside a Powerful Database ICE Uses to Identify and Deport People.” A new contract Palantir announced with DHS would add a new platform, ImmigrationOS, by September 2025, which would give ICE “near real-time visibility” into people self-deporting from the US,193Alayna Alvarez, “Palantir’s Partnership with ICE Deepens,” (<)em(>)Axios(<)/em(>), May 1, 2025, (<)a href='https://www.axios.com/local/denver/2025/05/01/palantir-deportations-ice-immigration-trump'(>)https://www.axios.com/local/denver/2025/05/01/palantir-deportations-ice-immigration-trump(<)/a(>). a “master database” that will purportedly integrate data from the Social Security Administration, the IRS, and Health and Human Services, including data obtained by DOGE teams without observing legal or procedural requirements.194U.S. House of Representatives, Committee on Oversight and Government Reform, “Letter to Michelle L. 
Anderson, Assistant Inspector General for Audit, Social Security Administration,” April 17, 2025, (<)a href='https://oversightdemocrats.house.gov/sites/evo-subsites/democrats-oversight.house.gov/files/evo-media-document/2025-04-17.gec-to-ssa-oig-master-data.pdf'(>)https://oversightdemocrats.house.gov/sites/evo-subsites/democrats-oversight.house.gov/files/evo-media-document/2025-04-17.gec-to-ssa-oig-master-data.pdf(<)/a(>).
The federal government has also invested significantly in facial-recognition technology that has been found to be biased and error-prone and is overwhelmingly used to surveil and track immigrants, asylum seekers, and activists. For example, Clearview AI, a facial-recognition company created with the intention of deploying surveillance technology against immigrants, people of color, and the political left, has received almost $4 million in contracts with ICE despite being sued in numerous states.195Luke O’Brien, “The Shocking Far-Right Agenda Behind the Facial Recognition Tech Used by ICE and the FBI,” (<)em(>)Mother Jones(<)/em(>), 2025, (<)a href='https://www.motherjones.com/politics/2025/04/clearview-ai-immigration-ice-fbi-surveillance-facial-recognition-hoan-ton-that-hal-lambert-trump/'(>)https://www.motherjones.com/politics/2025/04/clearview-ai-immigration-ice-fbi-surveillance-facial-recognition-hoan-ton-that-hal-lambert-trump(<)/a(>). Reporting from the US Government Accountability Office shows that between April 2018 and March 2022, Clearview was used by more federal law enforcement agencies than any other private company, including the US Postal Inspection Service, which used Clearview to target Black Lives Matter protesters, and Customs and Border Protection (CBP), which uses a facial-recognition app to screen asylum seekers.196Ibid.
Despite the dubious legality and known flaws of many of these systems, the integration of AI into immigration enforcement seems only poised to escalate: Acting ICE Director Todd Lyons has expressed aspirations for the agency to run deportations like “Amazon Prime for human beings,” and to use AI to “free up [detention] bed space” and “fill up airplanes.”197Marina Dunbar, “Ice Director Wants to Run Deportations like ‘Amazon Prime for Human Beings’,” (<)em(>)Guardian(<)/em(>), April 9, 2025, (<)a href='https://www.theguardian.com/us-news/2025/apr/09/ice-todd-lyons-deporation-amazon'(>)https://www.theguardian.com/us-news/2025/apr/09/ice-todd-lyons-deporation-amazon(<)/a(>). DHS is also experimenting with facial recognition technology to track migrant children—potentially from infancy—and monitor them as they grow up.198Eileen Guo, “The US Wants to Use Facial Recognition to Identify Migrant Children as They Age,” (<)em(>)Technology Review(<)/em(>), August 14, 2024, (<)a href='https://www.technologyreview.com/2024/08/14/1096534/homeland-security-facial-recognition-immigration-border'(>)https://www.technologyreview.com/2024/08/14/1096534/homeland-security-facial-recognition-immigration-border(<)/a(>).
The use of these tools provides a veneer of objectivity that masks not only outright racism and xenophobia, but also the steep political pressure on immigration agencies to restrict asylum, humanitarian relief, and other forms of immigration—pressures that predate the Trump administration but have sharply escalated since.199Mao et al., (<)em(>)Automating Deportation(<)/em(>), 16. As immigration experts at Just Futures Law put it, “regardless of how ‘accurate’ the AI program is at recommending detention and deportation, in the hands of immigration and policing agencies, the technology furthers a fundamentally violent mission of targeting communities for detention and deportation.”200Ibid.
Moreover, AI allows federal agencies to conduct immigration enforcement in ways that are profoundly and increasingly opaque, making it even more difficult for those who may be wrongly caught up or falsely accused to extricate themselves. Many of these tools are known to the public only because of legal filings, and they remain undisclosed in DHS’s AI inventory. But even once they are known, we still have very little information about how they are calibrated or what data they are trained on, which further diminishes the ability of individuals to assert their due process rights. These tools also rely on invasive surveillance of the public, from the screening of social media posts; to the use of facial recognition, aerial surveillance, and other monitoring techniques; to the purchase of bulk information about the public from data brokers like LexisNexis.201Sam Biddle, “LexisNexis Is Selling Your Personal Data to ICE So It Can Try to Predict Crimes,” (<)em(>)Intercept(<)/em(>), June 20, 2023, (<)a href='https://theintercept.com/2023/06/20/lexisnexis-ice-surveillance-license-plates'(>)https://theintercept.com/2023/06/20/lexisnexis-ice-surveillance-license-plates(<)/a(>).
The coercive nature of these systems helps explain how so many individuals who are not in violation of the law, including in some cases naturalized US citizens and green card holders, have been unlawfully detained by immigration authorities. Most concerningly, the opacity of AI systems helps prevent the remediation of these harms. But maybe that failure is part of the point. AI systems, even when faulty, act in the interest of enforcers who need to meet targets for deportation and arrest—whatever the means, whatever the cost. It’s all good for business.