Related Publications [3]
New AI Now Paper Highlights Risks of Commercial AI Used In Military Contexts
Oct 22, 2024
Safety and War: Safety and Security Assurance of Military AI Systems
Jun 25, 2024
The Algorithmically Accelerated Killing Machine
Jan 24, 2024
Related Press [14]
As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies
“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”
Associated Press
Feb 18, 2025
The Rush to A.I. Threatens National Security
“In the quest for supremacy in a purported technological arms race, it would be unwise to overlook the risks that A.I.’s current reliance on sensitive data poses to national security or to ignore its core technical vulnerabilities.”
The New York Times
Jan 27, 2025
AI admin tools pose a threat to national security
Artificial intelligence is already being used on the battlefield, and accelerated adoption is in sight. This year, Meta, Anthropic and OpenAI all declared that their AI foundation models were available for use by US national security. AI warfare is controversial and widely criticised, but a more insidious set of AI use cases has already been quietly integrated into the US military.
Dec 24, 2024
Meta says it’s making its Llama models available for US national security applications
According to a recent study from the nonprofit AI Now Institute, the AI deployed today for military intelligence, surveillance, and reconnaissance poses dangers because it relies on personal data that can be exfiltrated and weaponized by adversaries.
TechCrunch
Dec 24, 2024
U.S. Military Makes First Confirmed OpenAI Purchase for War-Fighting Forces
“It is extremely alarming that they’re explicit in OpenAI tool use for ‘unified analytics for data processing’ to align with USAFRICOM’s mission objectives,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, who has previously conducted safety evaluations for OpenAI.
The Intercept
Nov 15, 2024
White House Urges Agencies to Adopt AI for Military, Spy Use
Responding to the new strategy, Sarah Myers West, co-executive director of the AI Now Institute, a policy research center, warned against rushing to adopt the technology across many different domains, arguing for time-tested approaches to ensure safety. She said there’s a risk that in life-and-death decisions, human military operators may defer to the recommendations of AI systems.
Bloomberg
Nov 15, 2024
Researchers sound alarm on dual-use AI for defense
In the paper, published by the AI Now Institute, authors Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker point out the growing interest in commercial AI models—sometimes called foundation models—for military purposes, particularly for intelligence, surveillance, target acquisition, and reconnaissance. They argue that while concerns around AI’s potential for chemical, biological, radiological, and nuclear weapons dominate the conversation, big commercial foundation models already pose underappreciated risks to civilians.
Defense One
Nov 15, 2024
The AI industry is pushing a nuclear power revival — partly to fuel itself
“I want to see innovation in this country,” Myers West said. “I just want the scope of innovation to be determined beyond the incentive structures of these giant companies.”
NBC News
Nov 15, 2024
The AI industry is pushing a nuclear power revival — partly to fuel itself
“If you were to integrate large language models, GPT-style models into search engines, it’s going to cost five times as much environmentally as standard search,” said Sarah Myers West, managing director of the AI Now Institute, a research group focused on the social impacts of AI. At current growth rates, some new AI servers could soon gobble up more than 85 terawatt hours of electricity each year, researchers have estimated — more than some small nations’ annual energy consumption.
NBC News
Mar 7, 2024
OpenAI quietly deletes ban on using ChatGPT for “military and warfare”
“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission.
The Intercept
Jan 12, 2024
What the OpenAI drama means for AI progress — and safety
“The push to retain dominance is leading to toxic competition. It’s a race to the bottom,” says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.
Nature
Nov 23, 2023
The AI safety summit, and its critics
Amba Kak of the AI Now Institute, one of the few representatives of civil society at last week’s summit, said at the event’s conclusion that “we are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions.”
Politico
Nov 8, 2023
Global leaders commit to pre-deployment AI safety testing
AI Now Institute executive director Amba Kak, one of three civil society representatives at the summit table, praised pre-deployment testing commitments, but warned, “We are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions.”
Axios
Nov 3, 2023