The GOP bill that would unleash AI is getting closer to passing. AI Now’s Amba Kak says the plan keeps getting worse.
A wide swath of groups and lawmakers from across the political spectrum have raised alarms about the proposal to ban state-level regulations of AI for the next five years.
Hard Reset
Jul 1, 2025
Facing a Changing Industry, AI Activists Rethink Their Strategy
Wired
Jun 24, 2025
California seeks new guardrails on automated AI systems
In California, the state Senate has voted in favor of a so-called AI Bill of Rights, which would establish new guardrails around automated decision systems (ADS). To learn more about those systems, Marketplace’s Nova Safo spoke with Kate Brennan, associate director of the think tank AI Now Institute.
Marketplace
Jun 23, 2025
‘One Big Beautiful Bill’ could block AI regulations for 10 years, leaving its harms unchecked
"I can imagine that for lawmakers, Republican or Democrat, whose districts rely on BEAD funding for broadband access to their rural communities, it's really a strange bargain," Kak said.
PolitiFact
Jun 17, 2025
The Storm Clouds Looming Past the State Moratorium: Weak Regulation is as Bad as None
We must remain vigilant against a scenario that’s as harmful as no regulation at all: weak regulation that serves to legitimize the AI industry’s behavior and continue business as usual. A federal law that imposes baseline transparency disclosures and then restricts states’ ability to impose additional—or stricter—requirements could place us on a dangerous trajectory of inaction.
Tech Policy Press
Jun 11, 2025
Big AI isn’t just lobbying Washington—it’s joining it
On Tuesday, the AI Now Institute, a research and advocacy nonprofit that studies the social implications of AI, released a report that accused AI companies of “pushing out shiny objects to detract from the business reality while they desperately try to derisk their portfolios through government subsidies and steady public-sector (often carceral or military) contracts.” The organization says the public needs “to reckon with the ways in which today’s AI isn’t just being used by us, it’s being used on us.”
Fortune
Jun 6, 2025
NYC Book Launch: Empire of AI
AI Now Institute
May 30, 2025
Trump’s Big, Beautiful Handout to the AI Industry
The patchwork is hardly as daunting as Obernolte claims it is, argued Amba Kak, co-executive director of the AI Now Institute, a think tank that opposes commercial surveillance. The most sweeping state legislation that has actually passed — in California and Colorado — mostly addresses transparency about when AI is being used, she said. Laws in other states are designed to go after the worst-of-the-worst actors in the developing field, Kak said. Those laws target political “deepfakes,” AI “revenge porn,” and the use of AI by health insurance companies.
The Intercept
May 29, 2025
Report: How local governments can prioritize responsible AI adoption
Despite such challenges for regulation, local governments still have tools to help ensure they responsibly adopt and implement AI technology, according to a new report from the Local Progress Impact Lab and AI Now Institute.
Route Fifty
May 29, 2025
Expert Perspectives on 10-Year Moratorium on Enforcement of US State AI Laws
The recent proposal for a sweeping moratorium on all state AI-related legislation and enforcement flies in the face of common sense: We can’t treat the industry’s worst players with kid gloves while leaving everyday people, workers, and children exposed to egregious forms of harm. The industry’s claim that state laws amount to a “burdensome” “patchwork” of unwieldy and complex rules is not grounded in fact.
Tech Policy Press
May 23, 2025
AI can steal your voice, and there’s not much you can do about it
NBC News
May 23, 2025
AI Now Co-ED Amba Kak Testifies Against a Ten-Year Ban on State AI Enforcement Before the House Committee on Energy & Commerce
AI Now Institute
May 21, 2025
Californians would lose AI protections under bill advancing in Congress
It’s reasonable to interpret one of the exceptions to mean states like California could continue enforcing privacy law if this bill passed, said Amba Kak, co-executive director of the AI Now Institute, a research organization that advocates for equitable AI. But doing so is risky. “We can’t count on the fact that courts will see it this way, especially in the context of an otherwise sweeping moratorium with the clear intention to clamp down on AI-related enforcement,” she said.
CalMatters
May 16, 2025
US AI laws risk becoming more ‘European’ than Europe’s
At the state level, there is “incredible momentum” to fill the regulatory vacuum created by Washington’s inaction, according to Amba Kak, co-executive director of the AI Now Institute. States are determined to tackle the most “abhorrent, harmful and problematic” use cases of AI, she says.
Financial Times
May 15, 2025
New Report on the National Security Risks from Weakened AI Safety Frameworks
Heidy Khlaaf
Apr 21, 2025
Phase two of military AI has arrived
But the complexity of AI systems, which pull from thousands of pieces of data, makes that a herculean task for humans, says Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and previously led safety audits for AI-powered systems. “‘Human in the loop’ is not always a meaningful mitigation,” she says.
MIT Technology Review
Apr 15, 2025
Generative AI is learning to spy for the US military
Khlaaf adds that even if humans are “double-checking” the work of AI, there's little reason to think they're capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to come to conclusions, “it wouldn't really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.”
MIT Technology Review
Apr 11, 2025
DeepMind’s 145-page paper on AGI safety may not convince skeptics
Heidy Khlaaf, chief AI scientist at the nonprofit AI Now Institute, told TechCrunch that she thinks the concept of AGI is too ill-defined to be “rigorously evaluated scientifically.”
TechCrunch
Apr 2, 2025