The Growing Marketplace For AI Ethics
Companies at all stages on the AI development spectrum want to know where public policy on intelligent systems is headed. The work being produced by New York–based research institute AI Now offers a road map.
Forbes
Mar 27, 2019
Data Quality: The Risks Of Dirty Data And AI
The AI Now Institute recently published an assertive list of some of the lowlights in AI ethics over the past year, along with a report highlighting the growing ethical risks in surveillance.
Forbes
Mar 27, 2019
4 Industries That Feel The Urgency Of AI Ethics
According to Meredith Whittaker, co-founder and co-director of the AI Now Institute at NYU, this is only the tip of the ethical iceberg. Accountability and liability are open and pressing issues that society must address as autonomous vehicles take over roadways.
Forbes
Mar 27, 2019
Our Software Is Biased Like We Are. Can New Laws Change That?
An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
The Wall Street Journal
Mar 23, 2019
Google’s Approval Of $135 Million Payout To Execs Accused Of Sexual Misconduct Sparks Fresh Employee Backlash
Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the “GooglePayOutsForAll” social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she said. “Where else could those resources go?”
Forbes
Mar 12, 2019
Business Leaders Set the A.I. Agenda
It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect.
The New York Times
Mar 3, 2019
Read This. Then Put Away Your Phone.
“People are recognizing that there are issues, and they want to change it,” Ms. Whittaker said. “But has the objective function of these corporations changed? They’re still major corporations at a time of neoliberal capitalism that are optimizing their products for shareholder value.”
The New York Times
Mar 1, 2019
Is Ethical A.I. Even Possible?
“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence.
The New York Times
Mar 1, 2019
A Crucial Step for Averting AI Disasters
The findings have spurred calls for closer scrutiny. Microsoft recently called on governments to regulate facial-recognition technology and to require auditing of systems for accuracy and bias. The AI Now Institute, a research group at New York University, is studying ways to reduce bias in AI systems.
The New York Times
Feb 13, 2019
AI’s accountability gap
The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Jan 10, 2019
Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”
Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads.
The Intercept
Dec 6, 2018
Could a bold move by John Hancock upend the insurance industry?
But some are wary of the burden companies take on with such massive quantities of consumer information. Kate Crawford, founder of the AI Now Institute, tweeted about the pitfalls of the Vitality program after John Hancock announced its widespread implementation: false data, consumers trying to game the system, “intimate” surveillance and data breaches.
The Washington Post
Oct 4, 2018
California just replaced cash bail with algorithms
Rashida Richardson, policy director for AI Now, a nonprofit think tank dedicated to studying the societal impact of AI, tells Quartz that she’s skeptical that this system will be less biased than its predecessors.
Quartz
Sep 4, 2018
ACLU Podcast: How to fight an algorithm
What does all this mean for our civil liberties? And how can the public exercise oversight of a secret algorithm? AI Now Co-founder Meredith Whittaker discusses this brave new world — and the ways we can keep it in check.
ACLU
Aug 2, 2018
Facial recognition gives police a powerful new tracking tool. It’s also raising alarms.
“There needs to be greater transparency around the use of these technologies,” said Rashida Richardson, director of policy research at the AI Now Institute at New York University. “And a more open, public conversation about what types of use cases we are comfortable with — and what types of use cases should just not be available.”
Jul 30, 2018
Bias detectives – the researchers striving to make algorithms fair
As machine learning infiltrates society, scientists grapple with how to make algorithms fair
Nature
Jun 20, 2018
Silicon Valley is stumped – Even AI cannot always remove bias from hiring
Cofounder Meredith Whittaker raises concerns about AI products used in sourcing and hiring employees, questions their claims to “remove bias” from hiring, and calls for increased oversight and accountability.
CNBC
May 30, 2018
If Machines Take Over, Who Will Be in Charge?
That limited group is introducing human biases into algorithms, warned Kate Crawford, co-founder of the AI Now Institute. “In some ways, the people designing these systems are the least well-trained to think about the problems,” she said.
The Wall Street Journal
May 10, 2018