You May Already Be Bailing Out the AI Business

Is an artificial-intelligence bubble about to pop? The question of whether we’re in for a replay of the 2008 housing collapse—complete with bailouts at taxpayers’ expense—has saturated the news cycle. For every day that passes without disaster, AI companies can more persuasively insist that no such market correction is coming. But the federal government is already bailing out the AI industry with regulatory changes and public funds that will protect companies in the event of a private sector pullback.

The Wall Street Journal

Nov 13, 2025

The fusing of AI firms and the state is leading to a dangerous concentration of power

With all of this in mind, Hard Reset spoke with researcher Sarah Myers West, co-executive director of a think tank advocating for AI that serves the public interest, not just a select few. We discuss this consolidation of power among a few AI players—and how the government is actually hindering the development of healthier competition and consumer-friendly AI products, while flirting with financial disaster.

Hard Reset

Oct 31, 2025

The Destruction in Gaza Is What the Future of AI Warfare Looks Like

“AI systems, and generative AI models in particular, are notoriously flawed with high error rates for any application that requires precision, accuracy, and safety-criticality,” Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told Gizmodo. “AI outputs are not facts; they’re predictions. The stakes are higher in the case of military activity, as you’re now dealing with lethal targeting that impacts the life and death of individuals.”

Gizmodo

Oct 31, 2025

ChatGPT safety systems can be bypassed to get weapons instructions

“That OpenAI’s guardrails are so easily tricked illustrates why it’s particularly important to have robust pre-deployment testing of AI models before they cause substantial harm to the public,” said Sarah Myers West, a co-executive director at AI Now, a nonprofit group that advocates for responsible and ethical AI usage.

NBC News

Oct 31, 2025

The Rise and Fall of Nvidia’s Geopolitical Strategy

China’s Cyberspace Administration last month banned companies from purchasing Nvidia’s H20 chips, much to the chagrin of Nvidia CEO Jensen Huang. This followed a train wreck of events that unfolded over the summer.

Tech Policy Press

Oct 31, 2025

AI Now’s Partnership and Strategy Lead Alli Finn Testifies at the Philadelphia City Council Committee on Technology and Information Services

AI Now Institute

Oct 14, 2025

AI Now Co-ED Amba Kak Gives Remarks Before the UN General Assembly on AI Governance

AI Now Institute

Sep 26, 2025

How AI safety took a backseat to military money

AI firms are now working with weapons makers and the military. Safety expert Heidy Khlaaf breaks down what that means.

The Verge

Sep 25, 2025

ASML-Mistral AI: It’s the Geopolitics, Stupid

While subsidies and an EU Chips Act have failed to move the needle, this deal is a blueprint for something better: It plays to Europe’s existing strengths, shows there are alternatives to what AI researcher Leevi Saari calls the “voracious pressures” of US venture capital, and strengthens EU suppliers.

Bloomberg

Sep 11, 2025

Decision in US vs. Google Gets it Wrong on Generative AI

Gesturing towards the importance of generative AI in the search engine market, then dismissing its actual effects, sets a dangerous precedent. It is true that tech markets are being shaped by generative AI. But in this case the court failed to accurately examine the broader AI market and the effects of consolidated power.

Tech Policy Press

Sep 11, 2025

AI is costing jobs, but not always the way you think

Demand for AI is strong, but there’s no guarantee this gamble will pay off, according to Sarah Myers West at the AI Now Institute.

Marketplace

Sep 9, 2025

She’s investigating the safety and security of AI weapons systems.

Besides being error-prone, large language models have another big problem in these kinds of situations: They’re vulnerable to being compromised, which could allow adversaries to hijack systems and impact military decisions. Despite these known issues, militaries all over the world are increasingly using AI—an alarming reality that now drives the work of pioneering AI safety researcher Heidy Khlaaf.

MIT Technology Review

Sep 8, 2025

What’s the real cost of chasing AGI? Power consolidation is just the start, says the AI Now Institute.

On today’s episode of Equity, Rebecca Bellan caught up with Amba Kak and Dr. Sarah Myers West from the AI Now Institute, a think tank focused on the social implications of AI and the consolidation of power in the tech industry. Their recent report, dubbed Artificial Power, lays out the political economy driving today’s AI frenzy and what’s at stake for everyone else.

TechCrunch

Sep 3, 2025

Statement from Kate Brennan, Associate Director at the AI Now Institute, on the remedy decision in US v. Google

AI Now Institute

Sep 3, 2025

Is the AI Bubble Too Big to Fail?

“We’re now locked into a particular version of the market and the future where all roads lead to big tech,” says Amba Kak, co-executive director of the AI Now Institute, which studies AI development and policy. Indeed, the success of major stock indexes—and perhaps your 401(k)—is resting on the continued growth of AI: Meta, Amazon, and the chipmakers Nvidia and Broadcom have accounted for 60 percent of the S&P 500’s returns this year.  

Inc.

Aug 28, 2025

AI Now Chief AI Scientist Dr. Heidy Khlaaf Included on Time 100 AI 2025 List

AI Now Institute

Aug 28, 2025

Did Sam Altman Accidentally Admit That the AI Bubble Is Here?

The economics seem fuzzy at best, but the largest AI companies are unlikely to take a financial hit because of U.S. government subsidies, says Amba Kak, co-executive director of the AI Now Institute, a policy organization.

Inc.

Aug 12, 2025

Experts worry about transparency, unforeseen risks as DOD forges ahead with new frontier AI projects

“We’ve particularly warned before that commercial models pose a much more significant safety and security threat than military purpose-built models, and instead this announcement has disregarded these known risks and boasts about commercial use as an accelerator for AI, which is indicative of how these systems have clearly not been appropriately assessed,” Khlaaf explained.

DefenseScoop

Aug 4, 2025
