Nurses Sound Alarm as ‘Uber for Nursing’ Apps Push to Deregulate Healthcare

A new AI Now Institute report published April 21, 2026, warns that gig-work platforms marketed as "Uber for nursing" are aggressively lobbying states to rewrite healthcare staffing rules, a push that could leave nurses with less pay, fewer protections, and less control over their shifts, according to The Guardian.

nurse.org

Apr 23, 2026

‘Uber for nurses’: gig-work apps lobby to deregulate healthcare, report finds

Billion-dollar tech platforms are aggressively pushing for deregulation of the “Uber for nursing” industry in an effort to expand gig work in the healthcare sector, according to a report published on Tuesday.

The Guardian

Apr 21, 2026

‘Safety first’ puts Anthropic ahead in game of AI spin

But Dr Heidy Khlaaf, chief AI scientist at the AI Now Institute and a former OpenAI safety engineer, is sceptical. She notes Anthropic provides no comparison with existing automated security tools, nor any false-positive rates. “It also serves their ‘safety first’ image, as they’re able to justify the lack of public release, even a limited one for independent evaluation, as a public service – when it simply obscures experts’ abilities to independently validate their…”

The Observer

Apr 12, 2026

The Great AI Grift

Tech leaders want you to believe that AI is the key to a new golden age. The reality looks more like a bold, government-backed heist.

The Nation

Apr 10, 2026

AI Giants Go on Charm Offensive to Avert Public Backlash

But broad skepticism and fear about the impact of AI have made opposing all regulation untenable for tech company CEOs, said Kak, who is co-executive director of the AI Now Institute, which has advocated for AI regulation. If they can’t oppose every policy, “What’s the next best move?” she asked. “It’s to place yourself in the driver’s seat, and that is what every single one of them is doing.”

The Wall Street Journal

Apr 7, 2026

U.S. military is using AI to help plan Iran air attacks, sources say, as lawmakers call for oversight

“It’s very dangerous that ‘speed’ is somehow being sold to us as strategic here, when it’s really a cover for indiscriminate targeting when you consider how inaccurate these models are,” Khlaaf said.

NBC News

Mar 11, 2026

The one question everyone should be asking after OpenAI’s deal with the Pentagon

“In terms of safety guardrails for ‘high-stake decisions’ or surveillance, the existing guardrails for generative AI are deeply lacking, and it has been shown how easily compromised they are, intentionally or inadvertently,” Heidy Khlaaf, the chief AI scientist at the nonprofit AI Now Institute, told me. “It’s highly doubtful that if they cannot guard their systems against benign cases, they’d be able to do so for complex military and surveillance operations.”

Vox

Mar 6, 2026

Iran and AI on the battlefield

Today we're talking about AI military capabilities, how companies like Anthropic and OpenAI have become, or are on their way to becoming, deeply enmeshed in the military. And what happens when these companies and governments start building systems that help decide who lives and who dies in a war. I'm joined today by Heidy Khlaaf. She is the chief AI scientist at the AI Now Institute and an expert on AI safety within defence and national security, including in autonomous weapons systems.

CBC

Mar 6, 2026

Can Anthropic’s AI Claude be trusted in combat? | The Take

Tools from Anthropic and OpenAI are being used by the Pentagon to make military decisions in Iran, guiding decisions that could cost lives. Fast, powerful, or flawed, how have AI systems already changed how wars are fought?

Al Jazeera

Mar 6, 2026

Key Questions on the Role of Technology in the Expanding Middle East War

Tech Policy Press asked experts working at the intersection of technology policy, security, and international affairs to share what they are watching as the situation unfolds.

Tech Policy Press

Mar 6, 2026

AI company Anthropic amends core safety principle amid growing competition in sector

But Heidy Khlaaf, chief AI scientist at independent research group the AI Now Institute, says despite Anthropic’s safety-first reputation, it has always fallen short when it comes to its attempts to prevent human harm.

CBC

Feb 27, 2026

Anthropic loosens safety pledge to compete with its AI peers

Core to Anthropic’s safety effort had been a pledge called the responsible scaling policy, said Sarah Myers West, co-executive director of the AI Now Institute. “If they believe that the capabilities of these tools outstrip their ability to control them and ensure that they’re safe, they would stop building them,” she said of the policy.

Marketplace

Feb 25, 2026

Can India be a “third way” AI alternative to the U.S. and China?

In this context, the India summit, with its impact-oriented framing and calls to internationalism and “leadership of the Global South,” offers fertile terrain to build resistance to the status quo, and to stitch together the burgeoning national and local efforts that collectively represent a people-centered alternative. Can this summit genuinely be a moment for challenging how power is distributed globally in the AI economy?

Rest of World

Feb 11, 2026

What Are the Implications if the AI Boom Turns to Bust?

This episode considers whether today’s massive AI investment boom reflects real economic fundamentals or an unsustainable bubble, and how a potential crash could reshape AI policy, public sentiment, and narratives about the future that are embraced and advanced not only by Silicon Valley billionaires, but also by politicians and governments. 

Tech Policy Press

Jan 13, 2026

‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants

“The claims being made on these slides are quite concerning, and demonstrate an even more ambitious (and dangerous) use of AI than previously advertised, including the elimination of human intervention. It also cements that it is the DOE's strategy to use generative AI for nuclear purposes and licensing, rather than isolated incidents by private entities,” Heidy Khlaaf, head AI scientist at the AI Now Institute, told 404 Media.

404 Media

Dec 4, 2025

Is AI being shoved down your throat at work? Here’s how to fight back.

“There’s a whole range of different examples where unions have been able to really be on the front foot in setting the terms for how AI gets used — and whether it gets used at all,” Sarah Myers West, co-executive director of the AI Now Institute, told me recently.

Vox

Nov 16, 2025

Power Companies Are Using AI To Build Nuclear Power Plants

Both Guerra and Khlaaf are proponents of nuclear energy, but worry that the proliferation of LLMs, the fast tracking of nuclear licenses, and the AI-driven push to build more plants is dangerous. “Nuclear energy is safe. It is safe, as we use it. But it’s safe because we make it safe and it’s safe because we spend a lot of time doing the licensing and we spend a lot of time learning from the things that go wrong and understanding where it went wrong and we try to address it next time,” Guerra said.

404 Media

Nov 14, 2025

You May Already Be Bailing Out the AI Business

Is an artificial-intelligence bubble about to pop? The question of whether we’re in for a replay of the 2008 housing collapse—complete with bailouts at taxpayers’ expense—has saturated the news cycle. For every day that passes without disaster, AI companies can more persuasively insist that no such market correction is coming. But the federal government is already bailing out the AI industry with regulatory changes and public funds that will protect companies in the event of a private sector pullback.

The Wall Street Journal

Nov 13, 2025
