Related Publications [6]

Biometric Surveillance Is Quietly Expanding: Bright-Line Rules Are Key

Apr 11, 2023

A New AI Lexicon: Recognition

Aug 16, 2021

Regulating Biometrics: Taking stock of a rapidly changing landscape

Sep 23, 2020

Regulating Biometrics: Global Approaches and Urgent Questions

Sep 1, 2020

AI Now’s Testimony to the House Oversight Committee

Jan 15, 2020

Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force

Dec 4, 2019

Related Press [10]

Tech Companies are Training AI to Read Your Lips

“This is about actively creating a technology that can be put to harmful uses rather than identifying and mitigating vulnerabilities in existing technology,” Sarah Myers West, a researcher for the AI Now Institute, told Motherboard.

Vice

Jun 14, 2021

Smile for the camera: the dark side of China’s emotion-recognition tech

“If anything, research and investigative reporting over the last few years have shown that sensitive personal information is particularly dangerous when in the hands of state entities, especially given the wide ambit of its possible use by state actors.”

The Guardian

Mar 3, 2021

AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute

In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says regulation advocates are beginning to believe a biometric surveillance state is not inevitable.

VentureBeat

Sep 4, 2020

US lawmakers concerned by accuracy of facial recognition

Ms Whittaker said corporate interests should not be allowed to "race ahead" and incorporate this technology into their systems without safeguards.

BBC

Jan 16, 2020

The Technology 202: Facial recognition gets another look on Capitol Hill today from skeptical lawmakers

Expect a divided group: Whittaker will call for policymakers and businesses to halt all use of facial recognition in sensitive social and political contexts, according to a press release.

The Washington Post

Jan 15, 2020

Researchers criticize AI software that predicts emotions

A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions.

Reuters

Dec 12, 2019

Emotion-detecting tech should be restricted by law – AI Now

The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices.

BBC

Dec 12, 2019

Activists want Congress to ban facial recognition. So they scanned lawmakers’ faces.

Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.

Vox

Nov 15, 2019

San Francisco banned facial recognition tech. Here’s why other cities should too.

Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a new report from the AI Now Institute explains.

Vox

May 16, 2019

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads.

The Intercept

Dec 6, 2018