Related Publications [4]

Atlantic Plaza Towers tenants won a halt to facial recognition in their building: Now they’re calling for a moratorium on all residential use
Jan 9, 2020

In the Outcry over the Apple Card, Bias is a Feature, Not a Bug
Nov 22, 2019

Disability, Bias, and AI – Report
Nov 20, 2019

Discriminating Systems: Gender, Race, and Power in AI – Report
Apr 1, 2019

Related Press [13]
Questions swirl about possible racial bias in Twitter
Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, told CNBC: “Algorithmic discrimination is a reflection of larger patterns of social inequality … it’s about much more than just bias on the part of engineers or even bias in datasets, and will require more than a shallow understanding or set of fixes.”
CNBC
Sep 21, 2020

Fake Data Could Help Solve Machine Learning’s Bias Problem—if We Let It
“That process of creating a synthetic data set, depending on what you’re extrapolating from and how you’re doing that, can actually exacerbate the biases,” says Deb Raji, a technology fellow at the AI Now Institute.
Slate
Sep 17, 2020

‘Encoding the same biases’: Artificial intelligence’s limitations in coronavirus response
"We were seeing AI being used extensively before Covid-19, and during Covid-19 you're seeing an increase in the use of some types of tools," noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of AI Now Institute, which carries out research examining the social implications of AI.
Horizon
Sep 7, 2020

Predictive policing algorithms are racist. They need to be dismantled.
Police like the idea of tools that give them a heads-up and allow them to intervene early because they think it keeps crime rates down, says Rashida Richardson, director of policy research at the AI Now Institute.
MIT Technology Review
Jul 17, 2020

Bridging The Gender Gap In AI
The gap appears even more stark at the “FANG” companies—according to the AI Now Institute just 15% of AI research staff at Facebook and 10% at Google are women.
Forbes
Feb 17, 2020

The Apple Card algo issue: What you need to know about A.I. in everyday life
“These systems are being trained on data that’s reflective of our wider society,” West said. “Thus, AI is going to reflect and really amplify back past forms of inequality and discrimination.”
CNBC
Nov 14, 2019

In AI, Diversity Is A Business Imperative
So how do we address the need for diversity and prevent bias? New York University's AI Now Institute report suggests that in addition to hiring a more diverse group of candidates, companies must be more transparent about pay and discrimination and harassment reports, among other practices, to create an atmosphere that will welcome a more diverse group of people.
Forbes
Nov 14, 2019

The Racist History Behind Facial Recognition
Customers have used “affect recognition” for everything from measuring how people react to ads to helping children with autism develop social and emotional skills, but a report from the A.I. Now Institute argues that the technology is being “applied in unethical and irresponsible ways.”
The New York Times
Jul 10, 2019

Exposing the Bias Embedded in Tech
The other panel member, Meredith Whittaker, a founder and a director of the AI Now Institute at New York University, noted that voice recognition tools that rely on A.I. often don’t recognize higher-pitched voices.
The New York Times
Jun 17, 2019

The problem with AI? Study says it’s too white and male, calls for more women, minorities
Artificial intelligence technologies are developed mostly at major tech companies such as Facebook, Google, Amazon and Microsoft, and in a small number of university labs, all of which tilt white, affluent and male and, in many cases, are only getting more so. Only by adding more women, people of color and other underrepresented groups can the field address this bias and create more equitable systems, says Meredith Whittaker, a report author and co-founder of the AI Now Institute.
IMDiversity
Apr 16, 2019

LAPD to scrap some crime data programs after criticism
Rashida Richardson, director of policy research at the AI Now Institute and a co-author of the report, said Smith’s findings mirrored suspicions that police target specific communities. “This shows a larger policing problem,” she said. “None of this is standardized. A lot of this system is one-sided.”
LA Times
Apr 5, 2019

Our Software Is Biased Like We Are. Can New Laws Change That?
An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
The Wall Street Journal
Mar 23, 2019

Google’s Approval Of $135 Million Payout To Execs Accused Of Sexual Misconduct Sparks Fresh Employee Backlash
Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the “GooglePayOutsForAll” social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she said. “Where else could those resources go?”
Forbes
Mar 12, 2019