Could a bold move by John Hancock upend the insurance industry?
But some are wary of the burden companies take on with such massive quantities of consumer information. Kate Crawford, founder of the AI Now Institute, tweeted about the pitfalls of the Vitality program after John Hancock announced its widespread implementation: false data, consumers trying to game the system, “intimate” surveillance and data breaches.
The Washington Post
Oct 4, 2018
AI Now Law and Policy Reading List
AI Now Institute
Oct 1, 2018
AI Now 2018 Symposium
AI Now Institute
Sep 6, 2018
California just replaced cash bail with algorithms
Rashida Richardson, policy director for AI Now, a nonprofit think tank dedicated to studying the societal impact of AI, tells Quartz that she’s skeptical that this system will be less biased than its predecessors.
Quartz
Sep 4, 2018
ACLU Podcast: How to fight an algorithm
ACLU
Aug 2, 2018
Facial recognition gives police a powerful new tracking tool. It’s also raising alarms.
“There needs to be greater transparency around the use of these technologies,” said Rashida Richardson, director of policy research at the AI Now Institute at New York University. “And a more open, public conversation about what types of use cases we are comfortable with — and what types of use cases should just not be available.”
NBC News
Jul 30, 2018
You and AI – Machine learning, bias and implications for inequality
AI Now Institute
Jul 17, 2018
The New Age of Innovation: Government’s Role in Artificial Intelligence
Policy director Rashida Richardson discusses regulation of AI and its use in government on a panel alongside two members of Congress and the head of the IT Industry Council.
AI Now Institute
Jul 11, 2018
Litigating Algorithms Workshop
AI Now Institute
Jun 22, 2018
Bias detectives – the researchers striving to make algorithms fair
As machine learning infiltrates society, scientists grapple with how to make algorithms fair
Nature
Jun 20, 2018
Litigating Algorithms
AI Now Institute
Jun 13, 2018
Silicon Valley is stumped – Even AI cannot always remove bias from hiring
Co-founder Meredith Whittaker raises concerns about AI products used in sourcing and hiring employees, questions their claims to "remove bias" from hiring, and calls for increased oversight and accountability.
CNBC
May 30, 2018
If Machines Take Over, Who Will Be in Charge?
That limited group is introducing human biases into algorithms, warned Kate Crawford, co-founder of AI Now Research Institute. “In some ways, the people designing these systems are the least well-trained to think about the problems,” she said.
The Wall Street Journal
May 10, 2018
AI Experts Want Government Algorithms to Be Studied Like Environmental Hazards
AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would assure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and people could voice concerns if an algorithm was behaving in a biased or unfair way.
Quartz
Apr 9, 2018
How Coders Are Fighting Bias in Facial Recognition Software
“Lots of companies are now taking these things seriously, but the playbook for how to fix them is still being written,” says Meredith Whittaker, co-director of AI Now, an institute focused on ethics and artificial intelligence at New York University.
Wired
Mar 29, 2018
AI and Ethics: People, Robots and Society at Transformers: Artificial Intelligence | Washington Post Live
AI Now Institute
Mar 20, 2018
When Will Americans Be Angry Enough To Demand Honesty About Algorithms?
This week the AI Now Institute, a leading group studying the topic, published its own proposal. It’s called an “Algorithmic Impact Assessment,” or AIA, and it’s essentially an environmental impact report for automated software used by governments. “A similar process should take place before an agency deploys a new, high-impact automated decision system,” the group writes.
Fast Company
Feb 21, 2018
Artificial intelligence is going to supercharge surveillance
“The data they have is from police body cams, which tells us a lot about who an individual police officer may profile, but doesn’t give us a full picture,” says Whittaker. “There’s a real danger with this that we are universalizing biased pictures of criminality and crime.”
The Verge
Jan 23, 2018