AI Now Institute Announces 2017 Report With Key Recommendations for the Field of Artificial Intelligence

Our second annual report calls for an end to black box predictive systems in core public institutions like the criminal justice system, and outlines specific approaches needed to address bias in AI and related technologies

New York, NY – October 18, 2017 – The AI Now Institute, an interdisciplinary research center based at New York University, announced today the publication of its second annual research report. In advance of AI Now’s official launch in November, the 2017 report surveys the current use of AI across core domains, along with the challenges that the rapid introduction of these technologies is presenting. It also provides a set of ten core recommendations to guide future research and accountability mechanisms. The report focuses on key impact areas, including labor and automation, bias and inclusion, rights and liberties, and ethics and governance.

“The field of artificial intelligence is developing rapidly, and promises to help address some of the biggest challenges we face as a society,” said Kate Crawford, cofounder of AI Now and one of the lead authors of the report. “But the reason we founded the AI Now Institute is that we urgently need more research into the real-world implications of the adoption of AI and related technologies in our most sensitive social institutions. People are already being affected by these systems, be it while at school, looking for a job, reading news online, or interacting with the courts. With this report, we’re taking stock of the progress so far and the biggest emerging challenges that can guide our future research on the social implications of AI.”

The 2017 report comes at a time when AI technologies are being introduced in critical areas like criminal justice, finance, education, and the workplace. And the consequences of incomplete or biased systems can be very real. A team of journalists and technologists at ProPublica demonstrated how an algorithm used by courts and law enforcement to predict recidivism in criminal defendants was measurably biased against African Americans. In a different setting, a study at the University of Pittsburgh Medical Center observed that an AI system used to triage pneumonia patients was missing a major risk factor for severe complications. And there are many other high-stakes domains where these systems are currently being used without being tested and assessed for bias and inaccuracy. Indeed, standardized methods for conducting such testing have yet to be developed.

The 2017 report calls for all core public institutions – such as those responsible for criminal justice, healthcare, welfare, and education – to immediately cease using ‘black box’ AI and algorithmic systems and to move toward systems that deliver accountability through mechanisms such as validation, auditing, or public review. It calls on the AI industry to go beyond just recognizing that there is a problem, and to take concrete steps to better understand its effects on different populations, especially marginalized ones. Further, the report identifies the lack of women and underrepresented minorities working in AI as a foundational problem that is most likely having a material impact on AI systems and shaping their effects on society.

“When we talk about the risks involved with AI, there is a tendency to focus on the distant future,” said Meredith Whittaker, cofounder of AI Now and one of the lead authors of the report. “But these systems are already being rolled out in critical institutions, and there are no agreed-upon standards or methods to measure where the data is coming from or how the algorithm will impact real people when deployed ‘in the wild’. This is not something that a technical ‘fix’ can address – bias issues require consideration of underlying structural inequality and historical discrimination. We’re truly worried that the examples uncovered so far are just the tip of the iceberg. It’s imperative that we stop using black box algorithms in core institutions until we have methods for ensuring basic safety and fairness.”

The AI Now 2017 Report is available at this link. It was produced by the AI Now Institute at New York University with support from the John D. and Catherine T. MacArthur Foundation, and co-authored by Kate Crawford, Meredith Whittaker, Alex Campolo, and Madelyn Sanfilippo, with editors Andrew Selbst and Solon Barocas.

About the AI Now Institute

The AI Now Institute at New York University is an interdisciplinary research institute dedicated to understanding the social implications of artificial intelligence. Its work focuses on four core domains: labor and automation, bias and inclusion, rights and liberties, and safety and critical infrastructure.

The AI Now Institute will officially launch in November of 2017. You can learn more at www.ainowinstitute.org.