
The “common sense” around artificial intelligence has become potent over the past two years, imbuing the technology with a sense of agency and momentum that makes its current trajectory appear inevitable, and essential to US economic prosperity and global dominance. In this section, we break down the narratives propping up this “inevitability,” explaining why it is particularly challenging—but still necessary—to contest the current trajectory of AI, especially at this moment in global history.

The promise that artificial general intelligence, or “AGI,” is hovering just over the horizon is tilting the scales in many of the debates about how AI is affecting society. AI firms investing in the development of very large models at scale constantly assert that AGI is mere months or weeks away,[1: Sam Altman, “Reflections,” January 5, 2025, https://blog.samaltman.com/reflections; Sébastien Bubeck et al., “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” Microsoft, March 2023, https://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4.] poised to have transformative effects on society at large—making this promise central to their sales pitch for investment.[2: See Brian Merchant, “AI Generated Business,” AI Now Institute, December 2024, https://ainowinstitute.org/general/ai-generated-business; Berber Jin and Deepa Seetharaman, “This Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion,” Wall Street Journal, March 4, 2025, https://www.wsj.com/tech/ai/ai-safe-superintelligence-startup-ilya-sutskever-openai-2335259b; and John Koetsier, “OpenAI CEO Sam Altman: ‘We Know How To Build AGI,’” Forbes, January 6, 2025, https://www.forbes.com/sites/johnkoetsier/2025/01/06/openai-ceo-sam-altman-we-know-how-to-build-agi.] The discourse around AGI adds a veneer of inevitability to conversations about AI: if one company doesn’t achieve it, another will. It also gives governments an excuse to sit on their hands even as current versions of AI have profound effects on their constituents, as though the race to create AGI has a momentum all its own.
If anything, under both the Biden and Trump administrations, the US government has instead positioned itself as chief enabler: ready to wield every tool at its disposal—including investment, executive authority, and regulatory inaction—to push American AI firms ahead of their competitors in this race to AGI.[3: See Ezra Klein, “The Government Knows A.G.I. Is Coming,” New York Times, March 4, 2025, https://www.nytimes.com/2025/03/04/opinion/ezra-klein-podcast-ben-buchanan.html; and White House, “Removing Barriers to American Leadership in Artificial Intelligence,” January 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence.] It’s worth noting that those most vocal about the so-called “existential risks” posed by AGI have done as much as anyone to prop up and speed along the industry’s development.[4: Will Douglas Heaven, “Geoffrey Hinton Tells Us Why He’s Now Scared of the Tech He Helped Build,” MIT Technology Review, May 2, 2023, https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai.] OpenAI’s assertion that “it’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly”[5: OpenAI, “Introducing OpenAI,” December 11, 2015, https://openai.com/index/introducing-openai.] drives home that AI boosterism and existential (“x-risk”) fearmongering both play a role in propping up this vision of an AI with supreme capabilities.

What Is AGI? The History of Artificial General Intelligence

As Brian Merchant chronicles in his report “AI Generated Business,” the term AGI, coined in 1997, captured the notion of a “general intelligence” as a counterpoint to the then-dominant current in AI research, “expert systems,” which operated on rule-based logic designed as a formalized representation of how humans think.[6: Merchant, “AI Generated Business.”] Where expert systems worked only in the narrowest of applications, AGI would operate broadly, across a wide range of domains. But developers in the field largely ditched these ways of thinking about AI, turning instead to deep-learning techniques that proved more effective and that form the basis of today’s automated decision-making systems, among other applications.
Interest in AGI was revived in the 2010s, when companies like OpenAI seized on the term, first as shorthand for a form of machine intelligence intended to rival and eventually surpass human intelligence, and later as a term “central to their marketing efforts.”[7: Merchant, “AI Generated Business.”] The imagery invoked by AI firms is instructive, from Anthropic founder Dario Amodei’s sublime vision of “geniuses in a data center” capable of paradigm-changing scientific leaps like “designing new weapons or curing diseases,”[8: Dario Amodei, “Machines of Loving Grace,” October 2024, https://darioamodei.com/machines-of-loving-grace.] to the straightforwardly commercial logic underpinning OpenAI’s agreement with Microsoft: AGI is achieved when AI can generate $100 billion in profits.[9: Maxwell Zeff, “Microsoft and OpenAI Have a Financial Definition of AGI: Report,” TechCrunch, December 26, 2024, https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report.]
In this sense, ChatGPT walked so that AGI could run; the current crop of LLMs in the consumer market is a triumph of marketing—proof, as AI firms argue, that big, unexpected advancements in AI were not only possible but “just around the corner.”[10: Kevin Roose, “Powerful A.I. Is Coming. We’re Not Ready,” New York Times, March 14, 2025, https://www.nytimes.com/2025/03/14/technology/why-im-feeling-the-agi.html.] AGI has since been positioned as the next big step on the LLM advancement trajectory, albeit with little proof, beyond speculation, of how far or wide this leap will have to be.[11: See Anil Ananthaswamy, “How Close Is AI to Human-Level Intelligence?” Nature, December 3, 2024, https://www.nature.com/articles/d41586-024-03905-1 (highlighting OpenAI’s claims that “o1 works in a way that is closer to how a person thinks than do previous LLMs”); Ryan Browne, “AI That Can Match Humans at Any Task Will Be Here in Five to 10 Years, Google DeepMind CEO Says,” CNBC, March 17, 2025, https://www.cnbc.com/2025/03/17/human-level-ai-will-be-here-in-5-to-10-years-deepmind-ceo-says.html; Koetsier, “OpenAI CEO Sam Altman”; and Steve Ranger, “OpenAI Says It’s Charting a ‘Path to AGI’ with Its Next Frontier AI Model,” ITPro, May 30, 2024, https://www.itpro.com/technology/artificial-intelligence/openai-says-its-charting-a-path-to-agi-with-its-next-frontier-ai-model.] While this belief seems to be spreading among the general public, however, it is widely contested within the AI research community. In a recent survey of members of the Association for the Advancement of AI, for instance, 84 percent of respondents said that the neural-net architectures large models rely on are “insufficient to achieve AGI.”[12: Association for the Advancement of Artificial Intelligence (AAAI), AAAI 2025 Presidential Panel on the Future of AI Research, March 2025, https://aaai.org/wp-content/uploads/2025/03/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf.] In a more fundamental debunking of AGI claims, scholars such as Emily Bender[13: Emily M. Bender, “Resisting Dehumanization in the Age of ‘AI’,” Current Directions in Psychological Science 33, no. 2 (2024), https://doi.org/10.1177/09637214231217286.] and Henry Farrell,[14: Henry Farrell et al., “Large AI Models Are Cultural and Social Technologies,” Science 387, no. 6739 (2025): 1153–56, https://doi.org/10.1126/science.adt9819.] among others, have contested the very basis of claims to AGI, arguing instead that large models can “never be intelligent in the way that humans, or even bumble-bees, are”[15: Henry Farrell, “Should AGI-Preppers Embrace DOGE?” Programmable Mutter (blog), March 18, 2025, https://www.programmablemutter.com/p/should-agi-preppers-embrace-doge.] because AI cannot, in fact, create: it can only reflect, compress, and remix content that humans have already created, helping people coordinate or solve problems.[16: Prithvi Iyer, “What Do We Mean When We Say ‘Artificial General Intelligence?’” Tech Policy Press, February 13, 2024, https://www.techpolicy.press/what-do-we-mean-when-we-say-artificial-general-intelligence, citing Borhane Blili-Hamelin, Leif Hancox-Li, and Andrew Smart, “Unsocial Intelligence: An Investigation of the Assumptions of AGI Discourse,” arXiv (2024), https://arxiv.org/abs/2401.13142.]
While current AI models make the promise of AGI more tangible for policymakers and the general public, AGI is conveniently distanced from the fundamental and persistent limitations of LLMs on the grounds that AGI, by definition, will be a wholly new paradigm that leapfrogs these material concerns.[17: See Klein, “The Government Knows A.G.I. Is Coming”; Jakob Nielsen, “AI Hallucinations on the Decline,” Jakob Nielsen on UX (blog), February 13, 2025, https://jakobnielsenphd.substack.com/p/ai-hallucinations; and Josh Tyrangiel, “Sam Altman on ChatGPT’s First Two Years, Elon Musk and AI Under Trump,” Bloomberg, January 5, 2025, https://www.bloomberg.com/features/2025-sam-altman-interview.] The mythology around AGI masks the shallowness of today’s AI models, lending substance to the promise that transformative innovations are just around the corner.

If AGI Were Here, How Would We Even Know?

Despite bold public claims from the tech industry that AGI is “as little as two years”[18: Lakshmi Varanasi, “Here’s How Far We Are from AGI, According to the People Developing It,” Business Insider, November 9, 2024, https://www.businessinsider.com/agi-predictions-sam-altman-dario-amodei-geoffrey-hinton-demis-hassabis-2024-11.] away, the research community is far from agreement.[19: See generally Sayash Kapoor and Arvind Narayanan, “AGI Is Not a Milestone,” AI Snake Oil, May 1, 2025, https://www.aisnakeoil.com/p/agi-is-not-a-milestone.] A recent survey by the Association for the Advancement of AI (AAAI) of nearly five hundred AI researchers found that 76 percent of respondents believe that scaling up current approaches to yield AGI is unlikely or very unlikely to succeed.[20: AAAI, “AAAI 2025 Presidential Panel.”]
So how will we even know when AGI is here? The metrics currently on offer are largely narrow, vague, and self-serving benchmarks[21: Benji Edwards, “Elon Musk: AI Will Be Smarter than Any Human Around the End of Next Year,” Ars Technica, April 9, 2024, https://arstechnica.com/information-technology/2024/04/elon-musk-ai-will-be-smarter-than-any-human-around-the-end-of-next-year (claiming that AI capability increases by “a factor of 10 every year, if not every six to nine months”).]—and some researchers have argued that the preoccupation with AGI is “supercharging bad science.”[22: Borhane Blili-Hamelin et al., “Stop Treating ‘AGI’ as the North-Star Goal of AI Research,” arXiv (2025), https://arxiv.org/abs/2502.03689.] In place of scientific breakthroughs, industry labs are hinging their claims of proximity to AGI on grandiosely named tests like “Humanity’s Last Exam”[23: Center for AI Safety and Scale AI, “Humanity’s Last Exam,” accessed April 30, 2025, https://agi.safe.ai.] and “FrontierMath”[24: Epoch AI, “FrontierMath,” accessed April 30, 2025, https://epoch.ai/frontiermath.] that gauge only a narrow ability to answer clear, closed-ended questions[25: Cf. Satya Nadella’s statement: “The real benchmark is: the world growing at 10 percent.” Victor Tangermann, “Microsoft CEO Admits That AI Is Generating Basically No Value,” Futurism, February 22, 2025, https://futurism.com/microsoft-ceo-ai-generating-no-value.]—poor proxies for the promises these companies make about the technology’s capabilities, like curing cancer or solving climate change. Thomas Wolf, chief science officer at AI company Hugging Face, has argued that we are currently testing systems for their ability to be obedient students, rather than for mastery of bold counterfactual approaches or the ability to challenge their own training data, capacities that might show more promise for solving complex, intractable problems.[26: Thomas Wolf, “The Einstein AI Model,” blog, February 25, 2025, https://thomwolf.io/blog/scientific-ai.html.] And in 2025, a group of AI researchers from across academia and industry pointed to an endemic problem in the field of AI evaluation: it is more preoccupied with “coarse-grained claims of general intelligence” than with “real-world relevant measures of progress and performance.”[27: Weidinger, Raji, et al., “Toward an Evaluation Science for Generative AI Systems,” arXiv, March 13, 2025, https://doi.org/10.48550/arXiv.2503.05336.]
In sum, there is an endemic lack of clarity about both the definition of AGI and the time scale on which it is supposed to arrive, which makes its merits hard to contest or even reason about. The more urgent inquiry, however, is this: Whom and what does the disproportionate focus on AGI serve? And how will it shape the current trajectory of AI?

Who Benefits from AGI Discourse?

AGI has become the argument to end all other arguments: a technological milestone so abstract and so absolute that it gains default priority over all other means, and indeed, all other ends. It is routinely cast as a technology so all-powerful that it will overcome some of the most intractable challenges of our time, such that both investment in the sector and its ancillary costs are justified by the future solutions it will offer us. For example, Eric Schmidt recently dismissed the climate costs imposed by AI by asserting that humans aren’t set up to coordinate to solve climate change. Thus, the reasoning goes, we need to supercharge data centers—because in the long term, AGI has the best shot at solving the problem.[28: Prithvi Iyer, “Transcript: US Lawmakers Probe AI’s Role in Energy and Climate,” Tech Policy Press, April 11, 2025, https://www.techpolicy.press/transcript-us-lawmakers-probe-ais-role-in-energy-and-climate.] This is not only abstract AI solutionism at its peak; it also flattens and disguises the problem of climate change itself, recasting it as one awaiting a technical silver bullet and rendering invisible the challenges of political will, international cooperation, and material support for people rebuilding homes or housing climate refugees—everything it will actually take to meaningfully “solve” climate change.[29: See Eve Darian-Smith, “The Challenge of Political Will, Global Democracy and Environmentalism,” Environmental Policy and Law 54, no. 2–3 (2024): 117–26, https://doi.org/10.3233/EPL-239023; and Alejandro de la Garza, “We Have the Technology to Solve Climate Change. What We Need Is Political Will,” Time, April 7, 2022, https://time.com/6165094/ipcc-climate-action-political-will.]
Presenting AI as a quick technical fix to long-standing, structurally hard problems has been a consistent theme over the past decade (as we explore in our chapter on Consulting the Record), but past variants of technosolutionism at least had to demonstrate how the technology would solve the problem at hand. With AGI, we are offered no account of how this transformation will happen beyond the assertion that the current state of affairs will be overhauled. The debates around DOGE transforming government using AI have this flavor: in his interview with Ben Buchanan, Ezra Klein speaks of the general sentiment that, with superintelligent AI potentially around the corner, the government will inevitably need to be taken apart and rebuilt for the age of AGI.[30: Klein, “The Government Knows A.G.I. Is Coming.”] It’s the same logic that dictates that if AGI is truly going to propel scientific discoveries of the kind Amodei promises, then perhaps there will be no need for federal funding for science at all.

AGI’s Market-Boosting Function

Asserting that AGI is always on the horizon also serves a crucial market-preserving function for large-scale AI: it keeps investment flowing into the resources and computing infrastructure that key industry players need to sustain this paradigm. As we’ve argued, this current avatar of large-scale AI was set in motion by the simple rule that scaling up data and compute would lead to performance advancements—a logic that sedimented the dominance of the handful of companies that already controlled access to these inputs, along with pathways to market,[31: Meredith Whittaker, “The Steep Cost of Capture,” Interactions 28, no. 6 (2021): 50–55, https://doi.org/10.1145/3488666.] and in whose hands power would be further concentrated if AGI ever were achieved.[32: Miles Brundage (@Miles_Brundage), “Per Standard AI Scaling Laws, a 2x Advantage in Compute Does Not Yield a 2x Advantage in Capabilities,” X, March 6, 2025, https://x.com/miles_brundage/status/1897568753178865900.] The quest for the ever-shifting goalpost of AGI only reinforces this “bitter lesson” (as Anthropic CEO Amodei, following Rich Sutton, calls it).[33: Amodei, “Machines of Loving Grace,” citing Rich Sutton, “The Bitter Lesson,” Incomplete Ideas, March 13, 2019, https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf.]
There’s a lesson here from the 1980s, when, even before the term AGI was in vogue, the Reagan administration pushed a wildly ambitious (for the time) “Strategic Computing Initiative” focused on propelling general advancements in “AI,” along the lines of today’s AGI promise.[34: Emma Salisbury, “A Cautionary Tale on Ambitious Feats of AI: The Strategic Computing Program,” War on the Rocks, May 22, 2020, https://warontherocks.com/2020/05/cautionary-tale-on-ambitious-feats-of-ai-the-strategic-computing-program.] The initiative was propelled by the promise of new military capabilities, anxieties about Japanese dominance in AI, and the lure of private-sector opportunities. A billion dollars in taxpayer money was spent on a program, now universally acknowledged as a failure, that didn’t yield results even on the terms it set for itself. Postmortems of why it failed reach varied conclusions, but one is worth underscoring: then, as now, these advancements hinged not on revolutionary feats of science, but on scaling up computing power and data.
Existential-risk arguments, for their part, often have the same market-boosting effect: they paint AI systems as all-powerful (when in reality they are deeply flawed) and feed the idea of an arms race in which the US must prevent China from getting access to these purportedly dangerous tools.[35: Alvin Wang Graylin and Paul Triolo, “There Can Be No Winners in a US-China AI Arms Race,” MIT Technology Review, January 21, 2025, https://www.technologyreview.com/2025/01/21/1110269/there-can-be-no-winners-in-a-us-china-ai-arms-race.] We’ve seen these logics instrumentalized in increasingly aggressive export-control regimes. By drawing attention to the very systems they purportedly aim to contest, x-risk narratives create a Streisand effect: they encourage more people to see the AI dystopia in their present, fueling adoption and bolstering industry players rather than curbing their power. They have also narrowed the scope for policy intervention, centering debate on the two poles of acceleration and deceleration rather than facilitating a broad dialogue about AI development and its societal implications.
Ultimately, these twin myths around AGI position AI as powerful and worthy of investment, and draw attention away from the evidence to the contrary.

Displacing Grounded Expertise: Who Is Disempowered by the AGI Discourse?

Elevating AGI over other paths to solving hard problems is a supercharged form of technosolutionism,[36: Evgeny Morozov, To Save Everything, Click Here (PublicAffairs, 2014).] but it also means that those with technical expertise—not only those driving the technology’s development but also those fluent in using this new suite of tools—are normalized as the primary experts across broad areas of society and science in which they lack domain-specific context and experience.[37: In one telling example, researchers published a paper in Nature claiming the discovery of over forty novel materials using an AI-driven autonomous laboratory. Shortly afterward, two materials chemists critiqued the paper for failing to recognize systematic errors in unsupervised materials discovery. See Julia Robinson, “New Analysis Raises Doubts Over Autonomous Lab’s Materials ‘Discoveries’,” Royal Society of Chemistry, January 16, 2024, https://www.chemistryworld.com/news/new-analysis-raises-doubts-over-autonomous-labs-materials-discoveries/4018791.article; and Robert Palgrave (@Robert_Palgrave), “This exciting paper shows AI design of materials, robotic synthesis. 10s of new compounds in 17 days. But did they? This paper has very serious problems in materials characterisation. In my view it should never have got near publication. Hold on tight let’s take a look 😱,” X, November 30, 2023, https://x.com/Robert_Palgrave/status/1730358675523424344.] This has been a familiar fight over the past decade of AI development: those with lived experience and sector-specific knowledge have had to advocate for a determining role in questions about whether, and how, AI is deployed.
Whether that means nurses having a say in how AI is integrated into patient care, or parent groups fighting the use of facial recognition on their children in the classroom, there has been a consistent push to recenter who counts as an expert on baseline questions about AI integration. (Notably, some of these efforts have resulted in tokenistic approaches that offer impacted communities nominal seats at the table—too little, too late.) AGI presents a more formidable version of this challenge, given its abstract and absolutist form. For example, narratives about AGI upending the world of work routinely position workers across industries as subjects—or worse, collateral damage—of a great transformation, rather than as participants in, and indeed experts on, how these transitions will take place.[38: Samantha M. Kelly, “Elon Musk Says AI Will Take All Our Jobs,” CNN, May 23, 2024, https://www.cnn.com/2024/05/23/tech/elon-musk-ai-your-job/index.html.]