Related Press
How AI safety took a backseat to military money
AI firms are now working with weapons makers and the military. Safety expert Heidy Khlaaf breaks down what that means.
The Verge
Sep 25, 2025
She’s investigating the safety and security of AI weapons systems.
Besides being error-prone, there’s another big problem with large language models in these kinds of situations: They’re vulnerable to being compromised, which could allow adversaries to hijack systems and impact military decisions. Despite these known issues, militaries all over the world are increasingly using AI—an alarming reality that now drives the work of pioneering AI safety researcher Heidy Khlaaf.
MIT Technology Review
Sep 8, 2025
Experts worry about transparency, unforeseen risks as DOD forges ahead with new frontier AI projects
“We’ve particularly warned before that commercial models pose a much more significant safety and security threat than military purpose-built models, and instead this announcement has disregarded these known risks and boasts about commercial use as an accelerator for AI, which is indicative of how these systems have clearly not been appropriately assessed,” Khlaaf explained.
DefenseScoop
Aug 4, 2025
The A.I. Cold War
"The push to integrate A.I. products everywhere grants A.I. companies power that goes beyond financial incentives, enabling them to concentrate power in a way we've never seen before," Dr. Heidy Khlaaf, chief A.I. scientist at the A.I. Now Institute, told me.
Puck News
Jul 29, 2025
How the White House AI plan helps, and hurts, in the race against China
Sarah Myers West, co-executive director of the AI Now Institute, told Defense One that the new action plan “amounts to a workaround” of that failed provision. “The action plan, at its highest level, reads just like a wish list from Silicon Valley,” she said.
Defense One
Jul 23, 2025
Musk’s xAI was a late addition to the Pentagon’s set of $200 million AI contracts, former defense employee says
The Pentagon’s use of commercial LLMs has drawn some criticism, in part because AI models are generally trained on enormous sets of data that may include personal information on the open web. Mixing that information with military applications is too risky, said Sarah Myers West, a co-executive director of the AI Now Institute, a research organization.
NBC News
Jul 22, 2025
Big Tech enters the war business: How Silicon Valley is becoming militarized
“We argue that this is simply a cover for these companies to concentrate even more power and funding,” says Heidy Khlaaf, chief AI scientist at the AI Now Institute, a research center focused on the societal consequences of AI. Presenting themselves as protagonists of a quasi-civilizational crusade protects tech companies from “regulatory friction,” branding any call for accountability as “a detriment to national interests.” And it allows them to position themselves “not only as too big, but also as too strategically important to fail,” reads a recent AI Now Institute report.
El País
Jul 21, 2025
What OpenAI Doesn’t Want You to Know
AI companies are spending millions to get the laws they want. They're not trying to cure cancer, or save America. These companies want to make $100 billion overnight, and they're willing to sponsor dangerous laws to make it happen.
More Perfect Union
Jul 2, 2025
Big AI isn’t just lobbying Washington—it’s joining it
On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of “pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.” The organization says the public needs “to reckon with the ways in which today’s AI isn’t just being used by us, it’s being used on us.”
Fortune
Jun 6, 2025
DeepMind’s 145-page paper on AGI safety may not convince skeptics
Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be “rigorously evaluated scientifically.”
TechCrunch
Apr 2, 2025
As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
Associated Press
Feb 18, 2025
The Rush to A.I. Threatens National Security
"In the quest for supremacy in a purported technological arms race, it would be unwise to overlook the risks that A.I.’s current reliance on sensitive data poses to national security or to ignore its core technical vulnerabilities."
The New York Times
Jan 27, 2025
AI admin tools pose a threat to national security
Artificial intelligence is already being used on the battlefield. Accelerated adoption is in sight. This year, Meta, Anthropic and OpenAI all declared that their AI foundation models were available for use by US national security. AI warfare is controversial and widely criticised. But a more insidious set of AI use cases have already been quietly integrated into the US military.
Dec 24, 2024
Meta says it’s making its Llama models available for US national security applications
According to a recent study from the nonprofit AI Now Institute, the AI deployed today for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries.
TechCrunch
Dec 24, 2024