Related Publications

General Purpose AI Poses Serious Risks, Should Not Be Excluded From the EU’s AI Act | Policy Brief
Apr 13, 2023

ChatGPT And More: Large Scale AI Models Entrench Big Tech Power
Apr 11, 2023

Algorithmic Accountability for the Public Sector – Report
Aug 17, 2021

New Report Analyzes Data Governance Policies
Aug 3, 2021

Related Press

Generative AI risks concentrating Big Tech’s power. Here’s how to stop it.
If regulators don’t act now, the generative AI boom will concentrate Big Tech’s power even further. That’s the central argument of a new report from research institute AI Now.
MIT Technology Review
Apr 18, 2023

Opinion | Competition authorities need to move fast and break up AI
Unless regulators act, Big Tech’s dominance over the digital economy will be cemented
Financial Times
Apr 17, 2023

Finally, a realistic roadmap for getting AI companies in check
In a new report, the AI Now Institute — a research center studying the social implications of artificial intelligence — offers a roadmap that specifies exactly which steps policymakers can take. It’s refreshingly pragmatic and actionable, thanks to the government experience of authors Amba Kak and Sarah Myers West.
Vox
Apr 12, 2023

The Technology 202: Former FTC advisors urge swift action to counteract ‘AI hype’
Authored by two former advisers to FTC Chair Lina Khan, Amba Kak and Sarah Myers West, the paper offers a glimpse into how key tech regulators may respond to Silicon Valley’s soaring interest in AI.
The Washington Post
Apr 11, 2023

AI Isn’t Omnipotent. It’s Janky. – an interview with Amba Kak
“Everybody’s talking about futuristic risks, the singularity, existential risk. They’re distracting from the fact that the thing that really scares these companies is regulation. Regulation today.”
The Atlantic
Apr 3, 2023

500 Top Technologists and Elon Musk Demand Immediate Pause of Advanced AI Systems
"But here’s what’s key to understand about Chat GPT and other similar large language models: they’re not in any way actually reflecting the depth of understanding of human language—they’re mimicking its form.”
Gizmodo
Mar 29, 2023

As A.I. Booms, Lawmakers Struggle to Understand the Technology
“The picture in Congress is bleak,” said Amba Kak, the executive director of the AI Now Institute, a nonprofit research center, who recently advised the F.T.C. “The stakes are high because these tools are used in very sensitive social domains like in hiring, housing and credit, and there is real evidence that over the years, A.I. tools have been flawed and biased.”
The New York Times
Mar 3, 2023

The Mimic: How will ChatGPT change everyday life? Sarah Myers West on artificial intelligence and human resilience.
West expects this latest version to affect certain industries significantly—but, she says, as uncanny as ChatGPT’s emulation of human writing is, it’s still capable of only a very small subset of functions previously requiring human intelligence. This isn’t to say it represents no meaningful problems, but to West, its future—and the future of artificial intelligence altogether—will be shaped less by the technology itself and more by what humanity does to determine its use.
The Signal
Jan 17, 2023

FTC Chair Khan Brings on AI Policy Advice From NYU Researchers
"Khan announced Friday that NYU’s Meredith Whittaker, Amba Kak, and Sarah Myers West are joining an AI strategy group within the FTC’s Office of Policy Planning.
Bloomberg Law
Nov 19, 2021

The False Comfort of Human Oversight as an Antidote to A.I. Harm
Often expressed as the “human-in-the-loop” solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the “loop” of A.I. seems reassuring, this approach is instead “loopy” in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems.
Slate
Jun 15, 2021

All Is Fair In Mortgage Lending And Fintech?
As Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, explained to CBS News, “We turn to machine learning in the hopes that they’ll be more objective, but really what they’re doing is reflecting and amplifying historical patterns of discrimination and often in ways that are harder to see.”
Forbes
Aug 28, 2020

Big Tech juggles ethical pledges on facial recognition with corporate interests
"It shows that organizing and socially informed research works," said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. "But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No."
NBC News
Jun 22, 2020