Related Publications [7]
New Report Analyzes Data Governance Policies
Aug 3, 2021
AI Now submits comments to the Australian Human Rights Commission on AI and human rights
Mar 13, 2020
AI Now’s comments to Canada’s Office of the Privacy Commissioner on data privacy law and AI
Mar 12, 2020
AI Now’s Testimony to the EU Parliament on the Dangers of Predictive Policing
Feb 21, 2020
Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing
Jan 3, 2020
Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force
Dec 4, 2019
Litigating Algorithms 2019 U.S. Report: New Challenges to Government Use of Algorithmic Decision Systems
Sep 17, 2019
Related Press [9]
The False Comfort of Human Oversight as an Antidote to A.I. Harm
Often expressed as the “human-in-the-loop” solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the “loop” of A.I. seems reassuring, this approach is instead “loopy” in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems.
Slate
Jun 15, 2021
All Is Fair In Mortgage Lending And Fintech?
As Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute explained to CBS News, “We turn to machine learning in the hopes that they'll be more objective, but really what they're doing is reflecting and amplifying historical patterns of discrimination and often in ways that are harder to see.”
Forbes
Aug 28, 2020
Big Tech juggles ethical pledges on facial recognition with corporate interests
"It shows that organizing and socially informed research works," said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. "But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No."
NBC News
Jun 22, 2020
Catholic leaders call for ethical guidelines around AI
Tech companies need binding, detailed policies that hold them accountable in addressing the many ethical concerns surrounding AI, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Feb 28, 2020
US lawmakers concerned by accuracy of facial recognition
Ms Whittaker said corporate interests should not be allowed to "race ahead" and incorporate this technology into their systems without safeguards.
BBC
Jan 16, 2020
The Technology 202: Facial recognition gets another look on Capitol Hill today from skeptical lawmakers
Expect a divided group: Whittaker will call for policymakers and businesses to halt all use of facial recognition in sensitive social situations and political contexts, according to a press release.
The Washington Post
Jan 15, 2020
AI poses risks, but the White House says regulators shouldn’t “needlessly hamper” innovation
“It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government and the private sector around AI technology will offer greater insight to those of us working in the accountability space.”
Vox
Jan 8, 2020
New York City couldn’t pry open its own black box algorithms. So now what?
But Rashida Richardson, the policy research director of AI Now, says that position isn’t nearly enough, as the role has neither the mandate nor the power to reveal what automated systems are in use.
Vox
Dec 18, 2019
Bernie Sanders’s call to ban facial recognition tech for policing, explained
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
Vox
Aug 21, 2019
