The one question everyone should be asking after OpenAI’s deal with the Pentagon

“In terms of safety guardrails for ‘high-stake decisions’ or surveillance, the existing guardrails for generative AI are deeply lacking, and it has been shown how easily compromised they are, intentionally or inadvertently,” Heidy Khlaaf, the chief AI scientist at the nonprofit AI Now Institute, told me. “It’s highly doubtful that if they cannot guard their systems against benign cases, they’d be able to do so for complex military and surveillance operations.”

Vox

Mar 6, 2026

Key Questions on the Role of Technology in the Expanding Middle East War

Tech Policy Press asked experts working at the intersection of technology policy, security, and international affairs to share what they are watching as the situation unfolds.

Tech Policy Press

Mar 6, 2026

AI company Anthropic amends core safety principle amid growing competition in sector

But Heidy Khlaaf, chief AI scientist at independent research group the AI Now Institute, says despite Anthropic’s safety-first reputation, it has always fallen short when it comes to its attempts to prevent human harm.

CBC

Feb 27, 2026

Can India be a “third way” AI alternative to the U.S. and China?

In this context, the India summit, with its impact-oriented framing and calls to internationalism and “leadership of the Global South,” offers fertile terrain to build resistance to the status quo, and to stitch together the burgeoning national and local efforts that collectively represent a people-centered alternative. Can this summit genuinely be a moment for challenging how power is distributed globally in the AI economy?

Rest of World

Feb 11, 2026

What Are the Implications if the AI Boom Turns to Bust?

This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. 

Tech Policy Press

Jan 13, 2026

‘Atoms for Algorithms’: The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants

“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

404 Media

Dec 4, 2025

Is AI being shoved down your throat at work? Here’s how to fight back.

“There’s a whole range of different examples where unions have been able to really be on the front foot in setting the terms for how AI gets used — and whether it gets used at all,” Sarah Myers West, co-executive director of the AI Now Institute, told me recently.

Vox

Nov 16, 2025

Power Companies Are Using AI To Build Nuclear Power Plants

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

404 Media

Nov 14, 2025

You May Already Be Bailing Out the AI Business

Is an artificial-intelligence bubble about to pop? The question of whether we’re in for a replay of the 2008 housing collapse—complete with bailouts at taxpayers’ expense—has saturated the news cycle. For every day that passes without disaster, AI companies can more persuasively insist that no such market correction is coming. But the federal government is already bailing out the AI industry with regulatory changes and public funds that will protect companies in the event of a private sector pullback.

The Wall Street Journal

Nov 13, 2025

The fusing of AI firms and the state is leading to a dangerous concentration of power

With all of this in mind, Hard Reset spoke with researcher Sarah West, the co-executive director of a think tank advocating for an AI that benefits the public interest, not just a select few. We discuss this consolidation of power among a few AI players—and how the government is actually hindering the development of healthier competition and consumer-friendly AI products, while flirting with financial disaster.

Hard Reset

Oct 31, 2025

The Destruction in Gaza Is What the Future of AI Warfare Looks Like

“AI systems, and generative AI models in particular, are notoriously flawed with high error rates for any application that requires precision, accuracy, and safety-criticality,” Dr. Heidy Khlaaf, chief AI scientist at the AI Now Institute, told Gizmodo. “AI outputs are not facts; they’re predictions. The stakes are higher in the case of military activity, as you’re now dealing with lethal targeting that impacts the life and death of individuals.”

Gizmodo

Oct 31, 2025

ChatGPT safety systems can be bypassed to get weapons instructions

“That OpenAI’s guardrails are so easily tricked illustrates why it’s particularly important to have robust pre-deployment testing of AI models before they cause substantial harm to the public,” said Sarah Myers West, a co-executive director at AI Now, a nonprofit group that advocates for responsible and ethical AI usage.

NBC News

Oct 31, 2025

The Rise and Fall of Nvidia’s Geopolitical Strategy

China’s Cyberspace Administration last month banned companies from purchasing Nvidia’s H20 chips, much to the chagrin of Nvidia CEO Jensen Huang. This followed a train wreck of events that unfolded over the summer.

Tech Policy Press

Oct 31, 2025

Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?

For Heidy Khlaaf, the chief AI scientist at the AI Now Institute with a background in nuclear safety, Anthropic’s promise that Claude won’t help someone build a nuke is both a magic trick and security theater. She says that a large language model like Claude is only as good as its training data. And if Claude never had access to nuclear secrets to begin with, then the classifier is moot.

Oct 20, 2025

How AI safety took a backseat to military money

AI firms are now working with weapons makers and the military. Safety expert Heidy Khlaaf breaks down what that means.

The Verge

Sep 25, 2025

ASML-Mistral AI: It’s the Geopolitics, Stupid

While subsidies and an EU Chips Act have failed to move the needle, this deal is a blueprint for something better: It plays to Europe’s existing strengths, shows there are alternatives to what AI researcher Leevi Saari calls the “voracious pressures” of US venture capital and strengthens EU suppliers.

Bloomberg

Sep 11, 2025

Decision in US vs. Google Gets it Wrong on Generative AI

Gesturing towards the importance of generative AI in the search engine market, then dismissing its actual effects, sets a dangerous precedent. It is true that tech markets are being shaped by generative AI. But in this case the court failed to accurately examine the broader AI market and the effects of consolidated power.

Tech Policy Press

Sep 11, 2025

AI is costing jobs, but not always the way you think

Demand for AI is strong, but there’s no guarantee this gamble will pay off, according to Sarah Myers West at the AI Now Institute.

Marketplace

Sep 9, 2025
