Activists want Congress to ban facial recognition. So they scanned lawmakers’ faces.
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
Vox
Nov 15, 2019
The Apple Card algo issue: What you need to know about A.I. in everyday life
“These systems are being trained on data that’s reflective of our wider society,” West said. “Thus, AI is going to reflect and really amplify back past forms of inequality and discrimination.”
CNBC
Nov 14, 2019
In AI, Diversity Is A Business Imperative
So how do we address the need for diversity and prevent bias? New York University's AI Now Institute report suggests that, in addition to hiring more diverse candidates, companies must be more transparent about pay and about discrimination and harassment reports, among other practices, to create an atmosphere that welcomes a more diverse workforce.
Forbes
Nov 14, 2019
2019 Symposium
The fourth annual AI Now Symposium provided behind-the-scenes insights from those on the frontlines of the growing pushback against harmful AI. Our program featured leading lawyers, organizers, scholars, and tech workers, all of whom have employed creative strategies to combat exploitative AI systems across a wide range of contexts, from the automated allocation of social services, to policing and border control, to worker surveillance and exploitation, and well beyond.
AI Now Institute
Oct 2, 2019
AI at work: Machines are training human workers to be more compassionate
A study published by New York University's AI Now Institute in April shows that many AI systems favor white people and men.
Tech Xplore
Aug 23, 2019
AI Vulnerabilities: An Integral Look at Security for AI
This workshop convened practitioners from the more traditional software security world, those developing AI systems, researchers studying AI security and vulnerability, and researchers examining the social and ethical implications of AI. AI systems can fail or be exploited through a variety of triggers, from outright adversarial attacks, to bugs, to simple misapplication. Given the proliferation of AI systems across sensitive social domains, it is important to understand not simply where and how such vulnerabilities occur throughout the “stack” (and our capacity to predict them), but what their implications might be within a given context of use. This workshop explored the link between technical research and practice on security and vulnerability, and research on AI bias and fairness. Our hope was to expand the current discussion of security and bias to account for AI's technical capabilities and failings, how these might manifest in harm when exploited in practice, and what cross-sectoral efforts might be needed to contend with these risks.
AI Now Institute
Aug 23, 2019
Bernie Sanders’s call to ban facial recognition tech for policing, explained
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
Vox
Aug 21, 2019
Flawed Algorithms Are Grading Millions of Students’ Essays
“If the immediate feedback you’re giving to a student is going to be biased, is that useful feedback? Or is that feedback that’s also going to perpetuate discrimination against certain communities?” Sarah Myers West, a postdoctoral researcher at the AI Now Institute, told Motherboard.
Vice
Aug 20, 2019
The Phony Patriots of Silicon Valley
Meredith Whittaker, a co-founder of the AI Now Institute at New York University and a former Google employee, characterized the tech industry’s scaremongering about China as a tactical move meant to deflect criticism.
The New York Times
Aug 12, 2019
How Amazon will take over your house
On paper, Amazon is giving out cool stuff for free. But the company is also getting "extremely inexpensive access to record some of the most intimate parts of your life," says Meredith Whittaker, co-founder of the AI Now Institute.
Axios
Jul 31, 2019
The Racist History Behind Facial Recognition
Customers have used “affect recognition” for everything from measuring how people react to ads to helping children with autism develop social and emotional skills, but a report from the A.I. Now Institute argues that the technology is being “applied in unethical and irresponsible ways.”
The New York Times
Jul 10, 2019
This AI Software Is ‘Coaching’ Customer Service Workers. Soon It Could Be Bossing You Around, Too
Meredith Whittaker, a distinguished research scientist at New York University and co-director of the AI Now Institute, argues that Cogito could become another example of AI software that’s difficult for people to understand, but ends up having a massive impact on their lives regardless.
Time
Jul 8, 2019
Artificial Intelligence Makes Boosting ID Theft Protection Critical, House AI Task Force Chair Warns
“AI systems have evidenced a persistent pattern of gender and race-based discrimination. I have yet to encounter an AI system that was biased against white men,” said Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Forbes
Jun 27, 2019
Alphabet’s Annual Meeting Draws Protests on Multitude of Issues
Google engineer Irene Knapp spoke in favor of a proposal to tie executive compensation to the company’s progress on diversity and inclusion. Knapp cited research by the group AI Now showing that bias in artificial intelligence technology is related to the lack of diversity in the industry.
Bloomberg
Jun 19, 2019
Exposing the Bias Embedded in Tech
The other panel member, Meredith Whittaker, a founder and a director of the AI Now Institute at New York University, noted that voice recognition tools that rely on A.I. often don’t recognize higher-pitched voices.
The New York Times
Jun 17, 2019
Google walkout organizer leaves company after claims of retaliation
Whittaker said she was told that, in order to stay at the company, she would have to “abandon” her work on AI ethics and her role at AI Now Institute, a research center she cofounded at New York University.
ABC News
Jun 7, 2019
Advertisers want to mine your brain
"The people who are able to apply and market these technologies are large brands and corporations," says Meredith Whittaker, co-founder of the AI Now Institute. "This is not an equal-access set of technologies. They're used by some on others."
Axios
Jun 3, 2019
San Francisco banned facial recognition tech. Here’s why other cities should too.
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a new report from the AI Now Institute explains.
Vox
May 16, 2019