Related Publications

New AI Now Paper Highlights Risks of Commercial AI Used In Military Contexts
Oct 22, 2024

Safety and War: Safety and Security Assurance of Military AI Systems
Jun 25, 2024

The Algorithmically Accelerated Killing Machine
Jan 24, 2024
Related Press
What OpenAI Doesn’t Want You to Know
AI companies are spending millions to get the laws they want. They're not trying to cure cancer, or save America. These companies want to make $100 billion overnight, and they're willing to sponsor dangerous laws to make it happen.
More Perfect Union
Jul 2, 2025
Big AI isn’t just lobbying Washington—it’s joining it
On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of “pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.” The organization says the public needs “to reckon with the ways in which today’s AI isn’t just being used by us, it’s being used on us.”
Fortune
Jun 6, 2025
DeepMind’s 145-page paper on AGI safety may not convince skeptics
Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be “rigorously evaluated scientifically.”
TechCrunch
Apr 2, 2025
As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
Associated Press
Feb 18, 2025
The Rush to A.I. Threatens National Security
"In the quest for supremacy in a purported technological arms race, it would be unwise to overlook the risks that A.I.’s current reliance on sensitive data poses to national security or to ignore its core technical vulnerabilities."
The New York Times
Jan 27, 2025
AI admin tools pose a threat to national security
Artificial intelligence is already being used on the battlefield, and accelerated adoption is in sight. This year, Meta, Anthropic and OpenAI all declared that their AI foundation models were available for use by US national security. AI warfare is controversial and widely criticized. But a more insidious set of AI use cases has already been quietly integrated into the US military.
Dec 24, 2024
Meta says it’s making its Llama models available for US national security applications
According to a recent study from the nonprofit AI Now Institute, the AI deployed today for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries.
TechCrunch
Dec 24, 2024
U.S. Military Makes First Confirmed OpenAI Purchase for War-Fighting Forces
“It is extremely alarming that they’re explicit in OpenAI tool use for ‘unified analytics for data processing’ to align with USAFRICOM’s mission objectives,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, who has previously conducted safety evaluations for OpenAI.
The Intercept
Nov 15, 2024
White House Urges Agencies to Adopt AI for Military, Spy Use
Responding to the new strategy, Sarah Myers West, co-executive director of the AI Now Institute, a policy research center, warned against rushing to adopt the technology across many different domains, arguing for time-tested approaches to ensure safety. She said there’s a risk that in life-and-death decisions, human military operators may defer to the recommendations of AI systems.
Bloomberg
Nov 15, 2024
Researchers sound alarm on dual-use AI for defense
In the paper, published by the AI Now Institute, authors Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker point out the growing interest in commercial AI models—sometimes called foundation models—for military purposes, particularly for intelligence, surveillance, target acquisition, and reconnaissance. They argue that while concerns around AI’s potential for chemical, biological, radiological, and nuclear weapons dominate the conversation, big commercial foundation models already pose underappreciated risks to civilians.
Defense One
Nov 15, 2024
The AI industry is pushing a nuclear power revival — partly to fuel itself
“I want to see innovation in this country,” Myers West said. “I just want the scope of innovation to be determined beyond the incentive structures of these giant companies.”
NBC News
Nov 15, 2024
The AI industry is pushing a nuclear power revival — partly to fuel itself
“If you were to integrate large language models, GPT-style models into search engines, it’s going to cost five times as much environmentally as standard search,” said Sarah Myers West, managing director of the AI Now Institute, a research group focused on the social impacts of AI. At current growth rates, some new AI servers could soon gobble up more than 85 terawatt hours of electricity each year, researchers have estimated — more than some small nations’ annual energy consumption.
NBC News
Mar 7, 2024
OpenAI quietly deletes ban on using ChatGPT for “military and warfare”
“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission.
The Intercept
Jan 12, 2024