AI Experts Want Government Algorithms to Be Studied Like Environmental Hazards
AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would ensure that the public and governments understand the scope, capabilities, and secondary impacts an algorithm could have, and that people could voice concerns if an algorithm is behaving in a biased or unfair way.
Quartz
Apr 9, 2018
How Coders Are Fighting Bias in Facial Recognition Software
“Lots of companies are now taking these things seriously, but the playbook for how to fix them is still being written,” says Meredith Whittaker, co-director of AI Now, an institute focused on ethics and artificial intelligence at New York University.
Wired
Mar 29, 2018
When Will Americans Be Angry Enough To Demand Honesty About Algorithms?
This week the AI Now Institute, a leading group studying the topic, published its own proposal. It’s called an “Algorithmic Impact Assessment,” or AIA, and it’s essentially an environmental impact report for automated software used by governments. “A similar process should take place before an agency deploys a new, high-impact automated decision system,” the group writes.
Fast Company
Feb 21, 2018
Artificial intelligence is going to supercharge surveillance
“The data they have is from police body cams, which tells us a lot about who an individual police officer may profile, but doesn’t give us a full picture,” says Whittaker. “There’s a real danger with this that we are universalizing biased pictures of criminality and crime.”
The Verge
Jan 23, 2018
Artificial Intelligence Seeks An Ethical Conscience
Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice and finance. “The common examples I’m sharing today are just the tip of the iceberg,” she said.
Wired
Dec 7, 2017
Studying Artificial Intelligence At New York University
New York University just opened an institute that studies the social implications of artificial intelligence. NPR’s Linda Wertheimer talks with co-founder Kate Crawford.
NPR
Nov 26, 2017
The field of AI research is about to get way bigger than code
Kate Crawford, principal researcher at Microsoft Research, and Meredith Whittaker, founder of Open Research at Google, want to change that. Today they announced the AI Now Institute, a research organization that will explore how AI is affecting society at large. AI Now will be cross-disciplinary, bridging the gap between data scientists, lawyers, sociologists, and economists studying the implementation of artificial intelligence.
Quartz
Nov 14, 2017
AI experts want to end ‘black box’ algorithms in government
The AI Now report calls for agencies to refrain from using what it calls “black box” systems that are opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and cofounder of AI Now, says citizens should be able to know how systems that make decisions about them operate and how they have been tested or validated.
Wired
Oct 18, 2017