Related Publications

New AI Now Paper Highlights Risks of Commercial AI Used In Military Contexts

Oct 22, 2024

Safety and War: Safety and Security Assurance of Military AI Systems

Jun 25, 2024

The Algorithmically Accelerated Killing Machine

Jan 24, 2024

Related Press

U.S. Military Makes First Confirmed OpenAI Purchase for War-Fighting Forces

“It is extremely alarming that they’re explicit in OpenAI tool use for ‘unified analytics for data processing’ to align with USAFRICOM’s mission objectives,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute, who has previously conducted safety evaluations for OpenAI.

The Intercept

Nov 15, 2024

White House Urges Agencies to Adopt AI for Military, Spy Use

Responding to the new strategy, Sarah Myers West, co-executive director of the AI Now Institute, a policy research center, warned against rushing to adopt the technology across many different domains, arguing for time-tested approaches to ensure safety. She said there’s a risk that in life-and-death decisions, human military operators may defer to the recommendations of AI systems.

Bloomberg

Nov 15, 2024

Researchers sound alarm on dual-use AI for defense

In the paper, published by the AI Now Institute, authors Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker point out the growing interest in commercial AI models—sometimes called foundation models—for military purposes, particularly for intelligence, surveillance, target acquisition, and reconnaissance. They argue that while concerns around AI’s potential for chemical, biological, radiological, and nuclear weapons dominate the conversation, big commercial foundation models already pose underappreciated risks to civilians.

Defense One

Nov 15, 2024

The AI industry is pushing a nuclear power revival — partly to fuel itself

“I want to see innovation in this country,” Myers West said. “I just want the scope of innovation to be determined beyond the incentive structures of these giant companies.”

NBC News

Nov 15, 2024

The AI industry is pushing a nuclear power revival — partly to fuel itself

“If you were to integrate large language models, GPT-style models into search engines, it’s going to cost five times as much environmentally as standard search,” said Sarah Myers West, managing director of the AI Now Institute, a research group focused on the social impacts of AI. At current growth rates, some new AI servers could soon gobble up more than 85 terawatt hours of electricity each year, researchers have estimated — more than some small nations’ annual energy consumption.

NBC News

Mar 7, 2024

OpenAI quietly deletes ban on using ChatGPT for “military and warfare”

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission.

The Intercept

Jan 12, 2024

What the OpenAI drama means for AI progress — and safety

“The push to retain dominance is leading to toxic competition. It’s a race to the bottom,” says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.

Nature

Nov 23, 2023

The AI safety summit, and its critics

Amba Kak of the AI Now Institute, one of the few representatives of civil society at last week’s summit, said at the event’s conclusion that “we are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions.”

Politico

Nov 8, 2023

Global leaders commit to pre-deployment AI safety testing

AI Now Institute executive director Amba Kak, one of three civil society representatives at the summit table, praised pre-deployment testing commitments, but warned, "We are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions."

Axios

Nov 3, 2023

U.K.’s AI Safety Summit Ends With Limited, but Meaningful, Progress

“There has been a complete industry capture of this conversation, and in many ways this summit reflects that,” says Amba Kak, the executive director of the AI Now Institute, a research group. “The context to all of this is that we’re seeing a further concentration of power in the tech industry and, within that, a handful of actors. And if we let industry set the tone on AI policy, it’s not enough to say we want regulation—because we’re going to see regulation that further entrenches industry interests.”

Time

Nov 2, 2023

Stop talking about tomorrow’s AI doomsday when AI poses risks today

Letters written by tech-industry leaders are “essentially drawing boundaries around who counts as an expert in this conversation”, says Amba Kak, director of the AI Now Institute in New York City, which focuses on the social consequences of AI.

Nature

Jun 27, 2023

The AI apocalypse: Imminent risk or misdirection?

Discussions about artificial intelligence (AI) have quickly turned from excitement to apocalyptic warnings. Are claims that AI could pose an existential threat valid, or do they distract from the real harm AI is already causing?

Al Jazeera

Jun 10, 2023

Does artificial intelligence pose the risk of human extinction?

Tech industry leaders issue a warning as governments consider how to regulate AI without stifling innovation.

Al Jazeera

Jun 1, 2023