The “common sense” around artificial intelligence has become potent over the past two years, imbuing the technology with a sense of agency and momentum that makes the current trajectory of AI appear inevitable, and certainly essential to US economic prosperity and global dominance. In this section, we break down the narratives propping up this “inevitability,” explaining why it is particularly challenging—but still necessary—to contest the current trajectory of AI, especially at this moment in global history.

There has been a swift and aggressive narrative attack on AI regulation as anti-innovation, superfluous bureaucracy, and unnecessary friction. We’ve seen a total reversal in the US federal stance and, increasingly, a regulatory chill reverberating across quarters in the EU. We saw early signs toward the end of Biden’s term setting the government’s primary role as enabler of the AI industry,1Jake Sullivan, “Remarks by APNSA Jake Sullivan on AI and National Security” (speech, National Defense University, Washington, D.C., October 24, 2024), https://bidenwhitehouse.archives.gov/briefing-room/speeches-remarks/2024/10/24/remarks-by-apnsa-jake-sullivan-on-ai-and-national-security. and with the Trump administration it is the headlining message. The headwinds against baseline accountability for the tech sector in general, and AI companies in particular, are greater than ever.

The tech industry’s fickle policy promises have also revealed their true colors. Companies spent 2023 insisting they were extremely concerned about safety and firmly “pro-regulation.”2Written Testimony of Sam Altman, Before the U.S. Senate Committee on the Judiciary Subcommittee on Privacy, Technology, & the Law (2023) (Sam Altman, Chief Executive Officer, OpenAI). But as the center of power has shifted toward a deregulatory current, any superficial consensus on guardrails has just as quickly fallen away. OpenAI’s CEO Sam Altman, for instance, went from testifying before Congress that regulation is “essential” to lobbying against a minor safety provision in just fifteen months.

The government’s narrative change has been just as swift. In 2023, future-looking existential (“x-risk”) concerns took center stage. In policy fights, these x-risk safety concerns have often eclipsed the long list of material harms arising from corporate AI control, moving public and policy attention away from enacting policy and enforcing existing laws on the books to hold companies accountable.3Laurie Clarke, “How Silicon Valley Doomers are Shaping Rishi Sunak’s AI Plans,” Politico, September 14, 2023, https://www.politico.eu/article/rishi-sunak-artificial-intelligence-pivot-safety-summit-united-kingdom-silicon-valley-effective-altruism. Notably, Vice President Harris’s speech on the sidelines of the UK AI Safety Summit called out this tension explicitly, setting up an implicit counterpoint to the x-risk-dominated agenda of the rest of the summit, led by then Prime Minister Rishi Sunak: “These [existential] threats, without question, are profound, and they demand global action. But let us be clear. There are additional threats that also demand our action—threats that are currently causing harm and which, to many people, also feel existential.”4Kamala Harris, “Remarks by Vice President Harris on the Future of Artificial Intelligence” (speech, London, United Kingdom, November 1, 2023), https://bidenwhitehouse.archives.gov/briefing-room/speeches-remarks/2023/11/01/remarks-by-vice-president-harris-on-the-future-of-artificial-intelligence-london-united-kingdom.
Harris went on to describe the ways in which ordinary people have already been harmed by faulty, discriminatory, and inaccurate AI systems.

Unlike other regulatory conversations, the broad philanthropic and government interest in addressing x-risk safety concerns eventually served to further cement government relationships with the tech industry. The vast majority of efforts under the safety umbrella have been voluntary and industry-led—for example, numerous safety validation standards within the UK and US AI Safety Institutes were set by, or developed in collaboration with, industry players like Scale AI5The Scale Team, “Scale AI Partnering with the U.S. AI Safety Institute to Evaluate AI Models,” Scale, February 10, 2025, https://scale.com/blog/first-independent-model-evaluator-for-the-USAISI. and Anthropic6NIST, “U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation with Anthropic and OpenAI,” August 29, 2024, https://www.nist.gov/news-events/news/2024/08/us-ai-safety-institute-signs-agreements-regarding-ai-safety-research.—revealing that the government had been successfully convinced to regulate AI in lockstep with, and led by, industry-centered expertise.
On the other hand, when the rubber met the road with SB 1047, the California bill that sought to impose baseline documentation and review requirements on the largest AI companies for a very narrow class of advanced models, large parts of the tech industry pushed back against even this narrow regulatory intervention with all their might.7Wes Davis, “OpenAI Exec Says California’s AI Safety Bill Might Slow Progress,” The Verge, August 21, 2024, https://www.theverge.com/2024/8/21/24225648/openai-letter-california-ai-safety-bill-sb-1047. Even Anthropic—which positions itself as a company responsive to safety and the risks of AI—waffled on supporting SB 1047, first coming out against the bill before arriving at a hedged statement of support, saying the “benefits likely outweigh its costs,” but “we are not certain of this.”8Dario Amodei to Gavin Newsom, August 21, 2024, https://cdn.sanity.io/files/4zrzovbb/website/6a3b14a98a781a6b69b9a3c5b65da26a44ecddc6.pdf. Government players fell in line, with key Democratic legislators9Rachael Myrow, “Pelosi Blasts California AI Bill Heading to Newsom’s Desk as ‘Ill-Informed’,” KQED, August 29, 2024, https://www.kqed.org/news/12002254/california-bill-to-regulate-catastrophic-effects-of-ai-heads-to-newsoms-desk. framing the bill as detrimental to innovation.10Zoe Lofgren et al. to Gavin Newsom, August 15, 2024, https://democrats-science.house.gov/imo/media/doc/2024-08-15%20to%20Gov%20Newsom_SB1047.pdf.
In a letter to Governor Newsom, eight Democratic members of Congress succinctly summed up this position: “In short, we are very concerned about the effect this legislation could have on the innovation economy of California.”11Lofgren et al. to Newsom. Facing immense pressure, Governor Newsom ultimately vetoed the bill. 

The fight over SB 1047 opened the floodgates for pitting regulation against innovation. A recent one-two punch has shifted the terrain entirely: groups advocating for legislation mirroring SB 1047’s provisions are being politically targeted by Republicans,12U.S. Senate Committee on Commerce, Science, & Transportation, “Sen. Cruz Investigates AI Nonprofit for Potential Misuse of Taxpayer Funds,” April 7, 2025, https://www.commerce.senate.gov/2025/4/sen-cruz-investigates-ai-nonprofit-for-potential-misuse-of-taxpayer-funds. and a troubling new bill gaining support in California, SB 813,13Cal. S.B. 813, Reg. Sess. 2025-2026, amended in Senate March 26, 2025, https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB813. would allow AI firms to self-certify their models as safe and then use that certification as a legal shield against liability in civil actions for harm.14Chase Difeliciantonio, Tyler Katzenberger, and Christine Mui, “Voluntary AI Rules are Getting Critics to Yes,” Politico, April 21, 2025, https://www.politico.com/newsletters/politico-technology-california-decoded-preview/2025/04/21/voluntary-ai-rules-are-getting-critics-to-yes-00300551.

At the federal level, there was vanishingly little legislative progress, leaving large swaths of industry use entirely outside of regulatory constraints. Biden’s now-repealed EO15“Executive Order 14110 of October 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Code of Federal Regulations, title 88 (2023): 75191-75226, https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence. and the OMB memo16White House, “Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence,” October 24, 2024, https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2024/10/24/memorandum-on-advancing-the-united-states-leadership-in-artificial-intelligence-harnessing-artificial-intelligence-to-fulfill-national-security-objectives-and-fostering-the-safety-security. were bright spots, creating hooks for actionable accountability by targeting government use and procurement of AI. But even public investment proposals such as the National AI Research Resource pilot, originally positioned as a counterforce to concentrated power and resources in the AI industry, were recast under Biden’s 2024 National Security Memo as national competitiveness projects.
Former National Security Advisor Jake Sullivan’s October 2024 speech before the National Defense University also firmly positioned the US government as an enabler of frontier AI companies and emphasized the need for US investment in the AI sector to go full steam ahead in order to shore up the country’s strategic positioning against China.17Jake Sullivan, “Remarks by APNSA Jake Sullivan on AI and National Security” (speech, National Defense University, Washington, D.C., October 24, 2024).

Still, despite a far-from-coherent policy stance on AI under Biden, the ferocity of the attack on regulation ushered in by the Trump administration cannot be overstated.18Coral Davenport, “Inside Trump’s Plan to Halt Hundreds of Regulations,” New York Times, April 16, 2025, https://www.nytimes.com/2025/04/15/us/politics/trump-doge-regulations.html. Since being elected, President Trump has positioned regulation as a clear-cut way for the US to “lose” the global arms race, and his allies have propagated fears of Chinese control of global AI infrastructure as a threat to American security and democracy. On his first day in office, Trump gutted Biden’s Executive Order on AI, replacing it with his own order directing the revocation of existing federal AI policies that “act as barriers to American AI innovation.”19White House, “Removing Barriers to American Leadership in Artificial Intelligence,” January 23, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence. At a series of high-profile events, including Davos, the French AI Action Summit, and the Munich Security Conference, the Trump administration’s message rang loud and clear: global regulation is a targeted economic attack on US companies, and the antithesis of innovation. Meanwhile, the administration has expressly targeted the administrative state, calling into question the independent status of enforcement agencies and gutting the federal workforce, including key employees tasked with enforcing existing laws to rein in corporate dominance (this included unlawfully firing key Democratic FTC commissioners with a record on tech enforcement).
The Trump administration’s recent OMB memos do little to impose accountability on AI systems, and are instead designed to fast-track the procurement of AI across the federal government.20See Madison Alder, “Trump White House Releases Guidance for AI Use, Acquisition in Government,” FedScoop, April 4, 2025, https://fedscoop.com/trump-white-house-ai-use-acquisition-guidance-government; and Ellen P. Goodman, “Accelerating AI in the US Government, Evaluating the Trump OMB Memo,” Tech Policy Press, April 24, 2025, https://www.techpolicy.press/accelerating-ai-in-the-us-government-evaluating-the-trump-omb-memo.

Meanwhile, AI industrial policy—financial and regulatory support for expanding the national AI industry—is being positioned as the counterpoint to regulation, and a more appropriate role for government intervention. Unsurprisingly, Silicon Valley tech and AI executives have fallen21Elon Musk (@elonmusk), “America is a Nation of Builders Soon, You Will Be Free to Build,” X, November 5, 2024, https://x.com/elonmusk/status/1854023551575322959. quickly22Shaun Maguire (@shaunmmaguire), “It’s Time to Build [American Flag Emoji] Renaissance,” X, November 6, 2024, https://x.com/shaunmmaguire/status/1854049676544340174. into23Marc Andreessen (@pmarca), “Fuck Yes. The Romance of Production is Back.” X, November 7, 2024, https://x.com/pmarca/status/1854476136560132300. line, shoring up their seats at the table. And while Trump’s tangible industrial AI policy moves remain to be seen, the dominos set in motion by the Biden administration are poised to accelerate rapidly under Trump.

Trump’s agenda for global AI dominance and his expansive energy dominance agenda are mutually reinforcing, and his administration has repeatedly highlighted the need to expand US energy resources24White House, “Declaring a National Energy Emergency,” January 20, 2025, https://www.whitehouse.gov/presidential-actions/2025/01/declaring-a-national-energy-emergency. to remain competitive in AI.25Spencer Kimball, “Trump Says He Will Approve Power Plants for AI Through Emergency Declaration,” NBC Philadelphia, January 23, 2025, https://www.nbcphiladelphia.com/news/business/money-report/trump-says-he-will-approve-power-plants-for-ai-through-emergency-declaration/4086845. Debates about permitting requirements for infrastructure build-out had already taken center stage during the Biden administration. Senator Joe Manchin’s Energy Permitting Reform Act of 2024, which would expedite review procedures for energy and mineral projects, advanced out of committee with a bipartisan vote.26Energy Permitting Reform Act of 2024, S.4753, 118th Cong. (2024), https://www.congress.gov/bill/118th-congress/senate-bill/4753. The bill was supported by a coalition of fossil fuel companies and tech lobbyists, who claimed that AI innovation is tied to energy expansion. As they wrote in a letter to Congress: “America’s leadership in global innovation depends on the passage of permitting reforms that allow the US to build critical energy infrastructure.”27Americans for Responsible Innovation (ARI) et al. to Charles Schumer, Mitch McConnell, Mike Johnson, and Hakeem Jeffries, November 12, 2024, https://responsibleinnovation.org/wp-content/uploads/2024/11/Coalition-Letter-Tech-Leaders-Support-Manchin-Barrasso.pdf.

In some ways, the Trump administration’s pro-enforcement posture toward Big Tech companies—seen in the continuation of the DOJ’s case against Google and the FTC’s recent trial against Meta—is consistent with the Biden administration’s antitrust policies, and runs orthogonal to the otherwise deregulatory headwinds and hands-off approach to the tech industry. At the same time, these cases are designed not to strike at the root of the AI industry’s power, which has received an “all systems go” message from the Trump White House, but rather to curtail Big Tech censorship and undermine platform authority over state power. Already we see tech companies attempting to wield political favor to end the trials.28Dana Mattioli, Rebecca Ballhaus, and Josh Dawsey, “Inside Mark Zuckerberg’s Failed Negotiations to End Antitrust Case,” Wall Street Journal, April 15, 2025, https://www.wsj.com/us-news/law/mark-zuckerberg-meta-antitrust-ftc-negotiations-a53b3382; Brendan Bordelon and Gabby Miller, “‘Just Chaos’: How Trump’s White House Could Swing the War on Big Tech,” Politico, April 20, 2025, https://www.politico.com/news/2025/04/20/google-antitrust-trial-trump-00299586. And Google is set to argue that structural separation will undermine US national security,29Brendan Bordelon and Gabby Miller, “It’s Breakup Season for Tech in Washington,” Politico PRO Morning Tech, April 18, 2025, https://subscriber.politicopro.com/newsletter/2025/04/its-breakup-season-for-tech-in-washington-00298120. potentially derailing bold antitrust remedies from the court.
Despite these cases, it is unlikely that the Trump DOJ and FTC will broadly undermine the AI industry’s market power as a matter of policy, no matter how the antitrust suits are decided.30For example, the previous FTC administration investigated the relationship between cloud service providers and AI developers. Federal Trade Commission, Partnerships Between Cloud Service Providers and AI Developers, by Office of Technology Staff (2025), https://web.archive.org/web/20250118211330/https://www.ftc.gov/system/files/ftc_gov/pdf/p246201_aipartnerships6breport_redacted_0.pdf.

The drift toward deregulation has begun even in the European Union, traditionally seen as a staunch regulatory power. Driven by rightward electoral shifts, the increasing securitization of AI, and new geopolitical realities under Trump, the once proudly proclaimed digital regulation agenda is now seen as a liability by European policymakers. In addition to scrapping planned bills, such as the AI Liability Directive, which would have created a product liability framework for AI,31Caitlin Andrews, “European Commission Withdraws AI Liability Directive From Consideration,” IAPP, February 12, 2025, https://iapp.org/news/a/european-commission-withdraws-ai-liability-directive-from-consideration. there is appetite in the high halls of EU policymaking to walk back rules already agreed to. While backtracking is constrained—at least thus far—by the embarrassing optics of bending under US pressure, when it comes to implementation there is growing pressure to create as much flexibility as possible so as to mute the impact of the laws without changing their letter.32Daniel Mügge and Leevi Saari, “The EU AI Policy Pivot: Adaptation or Capitulation?” Tech Policy Press, February 25, 2025, https://www.techpolicy.press/the-eu-ai-policy-pivot-adaptation-or-capitulation.
This push to create flexibility for domestic companies is complicated by the importance of these rules as a rare source of leverage in the nascent trade war between the EU and the US.33Jacob Parry, Camille Gus, and Francesca Micheletti, “EU Set to Fine Apple and Meta Amid Escalating Trade War,” Politico, March 31, 2025, https://www.politico.eu/article/eu-set-fine-apple-meta-amid-escalating-trade-war. The extent to which European digital regulation becomes a pawn in this debate remains to be seen.

More generally, the tone in the European Union and its member states has become more enabling, paralleling developments elsewhere. French President Emmanuel Macron’s “plug, baby, plug” quip at the Paris AI Action Summit crystallized this shift in sentiment.34Clea Caulcutt, “‘Plug, Baby, Plug’: Macron Pushes for French Nuclear-Powered AI,” Politico, February 10, 2025, https://www.politico.eu/article/emmanuel-macron-answer-donald-trump-fossil-fuel-drive-artificial-intelligence-ai-action-summit. Leveraging the tools of statecraft and existing infrastructures (such as abundant nuclear energy in France) to promote the development of AI is increasingly central to the broader push for European sovereignty. In addition to new public investments in AI infrastructures, new political coalitions and power players are emerging in the background to facilitate this change. A recent large public-private partnership with an investment pledge of €150 billion by a collective of leading European industrial giants and tech companies, complemented by direct access to heads of European states to discuss a “drastically simplified regulatory framework for AI,” is one example of these changing winds.35Anna Desmarais, “Here’s What Has Been Announced at the AI Action Summit,” Euronews, February 11, 2025, https://www.euronews.com/next/2025/02/11/heres-what-has-been-announced-at-the-ai-action-summit.

Absent from this discussion is the role regulation can play in fostering innovation within markets, particularly given the dynamism and complexity that AI exhibits. By creating a stable regulatory environment with robust competition among firms and a level playing field that enables new entrants to thrive, well-crafted regulation can act as an enabler of innovation rather than its adversary in emerging markets (see Chapter 4: A Roadmap for Action).