Our Software Is Biased Like We Are. Can New Laws Change That?
An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
The Wall Street Journal
Mar 23, 2019
Google’s Approval Of $135 Million Payout To Execs Accused Of Sexual Misconduct Sparks Fresh Employee Backlash
Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the “GooglePayOutsForAll” social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she said. “Where else could those resources go?”
Forbes
Mar 12, 2019
Business Leaders Set the A.I. Agenda
It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect.
The New York Times
Mar 3, 2019
Read This. Then Put Away Your Phone.
“People are recognizing that there are issues, and they want to change it,” Ms. Whittaker said. “But has the objective function of these corporations changed? They’re still major corporations at a time of neoliberal capitalism that are optimizing their products for shareholder value.”
The New York Times
Mar 1, 2019
Is Ethical A.I. Even Possible?
“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence.
The New York Times
Mar 1, 2019
A Crucial Step for Averting AI Disasters
The findings have spurred calls for closer scrutiny. Microsoft recently called on governments to regulate facial-recognition technology and to require auditing of systems for accuracy and bias. The AI Now Institute, a research group at New York University, is studying ways to reduce bias in AI systems.
The New York Times
Feb 13, 2019
AI’s accountability gap
The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Jan 10, 2019
Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”
Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads.
The Intercept
Dec 6, 2018
Could a bold move by John Hancock upend the insurance industry?
But some are wary of the burden companies take on with such massive quantities of consumer information. Kate Crawford, founder of the AI Now Institute, tweeted about the pitfalls of the Vitality program after John Hancock announced its widespread implementation: false data, consumers trying to game the system, “intimate” surveillance and data breaches.
The Washington Post
Oct 4, 2018
California just replaced cash bail with algorithms
Rashida Richardson, policy director for AI Now, a nonprofit think tank dedicated to studying the societal impact of AI, tells Quartz that she’s skeptical that this system will be less biased than its predecessors.
Quartz
Sep 4, 2018
ACLU Podcast: How to fight an algorithm
ACLU
Aug 2, 2018
Facial recognition gives police a powerful new tracking tool. It’s also raising alarms.
“There needs to be greater transparency around the use of these technologies,” said Rashida Richardson, director of policy research at the AI Now Institute at New York University. “And a more open, public conversation about what types of use cases we are comfortable with — and what types of use cases should just not be available.”
Jul 30, 2018
Bias detectives – the researchers striving to make algorithms fair
As machine learning infiltrates society, scientists grapple with how to make algorithms fair
Nature
Jun 20, 2018
Silicon Valley is stumped: Even AI cannot always remove bias from hiring
Cofounder Meredith Whittaker raises concerns about AI products used in sourcing and hiring employees, questions their claims to "remove bias" from hiring, and calls for increased oversight and accountability.
CNBC
May 30, 2018
If Machines Take Over, Who Will Be in Charge?
That limited group is introducing human biases into algorithms, warned Kate Crawford, co-founder of AI Now Research Institute. “In some ways, the people designing these systems are the least well-trained to think about the problems,” she said.
The Wall Street Journal
May 10, 2018
AI Experts Want Government Algorithms to Be Studied Like Environmental Hazards
AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would ensure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.
Quartz
Apr 9, 2018
How Coders Are Fighting Bias in Facial Recognition Software
“Lots of companies are now taking these things seriously, but the playbook for how to fix them is still being written,” says Meredith Whittaker, co-director of AI Now, an institute focused on ethics and artificial intelligence at New York University.
Wired
Mar 29, 2018
When Will Americans Be Angry Enough To Demand Honesty About Algorithms?
This week the AI Now Institute, a leading group studying the topic, published its own proposal. It’s called an “Algorithmic Impact Assessment,” or AIA, and it’s essentially an environmental impact report for automated software used by governments. “A similar process should take place before an agency deploys a new, high-impact automated decision system,” the group writes.
Fast Company
Feb 21, 2018