Related Publications [4]

Algorithmic Accountability Policy Toolkit
Oct 9, 2018

Taking Algorithms To Court
Sep 24, 2018

Letter to the FTC on protecting consumer rights
Aug 22, 2018

Algorithmic Impact Assessments Report: A Practical Framework for Public Agency Accountability
Apr 9, 2018

Related Press [8]
Google’s brand-new AI ethics board is already falling apart
“The frameworks presently governing AI are not capable of ensuring accountability,” a review of AI ethical governance by the AI Now Institute concluded in November.
Vox
Apr 3, 2019

4 Industries That Feel The Urgency Of AI Ethics
According to Meredith Whittaker, co-founder and co-director of the AI Now Institute at NYU, this is only the tip of the ethical iceberg. Accountability and liability are open and pressing issues that society must address as autonomous vehicles take over roadways.
Forbes
Mar 27, 2019

AI Regulation: It’s Time For Training Wheels
Most companies designing and using artificial intelligence (AI) today already recognize the need for guiding principles and a model for governance. Some are actively developing their own ethical AI policies, while others are collaborating with nonprofit organizations, such as the AI Now Institute or the Partnership on AI, that are grappling with the same issues.
Forbes
Mar 27, 2019

Is Ethical A.I. Even Possible?
“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence.
The New York Times
Mar 1, 2019

AI’s accountability gap
The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Jan 10, 2019

AI Experts Want Government Algorithms to Be Studied Like Environmental Hazards
AI Now, a nonprofit founded to study the societal impacts of AI, said an algorithmic impact assessment (AIA) would ensure that the public and governments understand the scope, capability, and secondary impacts an algorithm could have, and that people could voice concerns if an algorithm were behaving in a biased or unfair way.
Quartz
Apr 9, 2018

When Will Americans Be Angry Enough To Demand Honesty About Algorithms?
This week the AI Now Institute, a leading group studying the topic, published its own proposal. It’s called an “Algorithmic Impact Assessment,” or AIA, and it’s essentially an environmental impact report for automated software used by governments. “A similar process should take place before an agency deploys a new, high-impact automated decision system,” the group writes.
Fast Company
Feb 21, 2018

AI experts want to end ‘black box’ algorithms in government
The AI Now report calls for agencies to refrain from using what it calls “black box” systems that are opaque to outside scrutiny. Kate Crawford, a researcher at Microsoft and co-founder of AI Now, says citizens should be able to know how systems that make decisions about them operate and how they have been tested or validated.
Wired
Oct 18, 2017