Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
Law enforcement agencies are increasingly using predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced during documented periods of flawed, racially biased, and sometimes unlawful practices and policies (“dirty policing”). These policing practices and policies shape the environment and the methodology by which data is created, which raises the risk of creating inaccurate, skewed, or systemically biased data (“dirty data”). If predictive policing systems are informed by such data, they cannot escape the legacies of the unlawful or biased policing practices that they are built on. Nor do current claims by predictive policing vendors provide sufficient assurances that their systems adequately mitigate or segregate this data.
Discriminating Systems: Gender, Race, and Power in AI
The diversity problems of the AI industry and the issues of bias in AI systems tend to be considered separately. In this report we suggest that they are two sides of the same problem: issues of discrimination in the workforce and in system building are deeply related. Moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity, and vice versa. Our research points to new ways of understanding the relationships between these complex problems, which can open up new pathways to redressing the current imbalances and harms.
Drawing on a thorough review of existing literature and current research on issues of gender, race, class, and artificial intelligence, this pilot study examines the scale of AI’s current diversity crisis and possible paths forward. It represents the first stage of a multi-year project examining the intersection of gender, race, and power in AI.
AI Now 2018 Report
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.
Algorithmic Accountability Policy Toolkit
AI Now developed a toolkit to help advocates uncover and understand where algorithms are being used in government and to inform advocacy strategies and tactics. The toolkit includes a breakdown of key concepts and questions, an overview of existing research, summaries of algorithmic systems currently used in government, and guidance on advocacy strategies to identify and interrogate the use of these systems.
Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems
In June 2018, AI Now teamed up with NYU Law’s Center on Race, Inequality, and the Law and the Electronic Frontier Foundation to host a first-of-its-kind workshop critically examining litigation strategies for challenging government use of algorithmic decision systems across different disciplines and areas of law. The Litigating Algorithms report highlights the critical observations, findings, and areas for future discussion and collaboration that emerged from this dynamic workshop.
Anatomy of an AI System
How can we visualize and understand the true scale of AI systems? This large-scale map and long-form essay, produced in partnership with SHARE Lab, investigates the human labor, data, and planetary resources required to operate an Amazon Echo.
Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability
Our Algorithmic Impact Assessment Report helps affected communities and stakeholders assess the use of AI and algorithmic decision-making in public agencies and determine where – or if – their use is acceptable. Algorithms in government are already a part of decisions that affect people’s lives, but there are no agreed-upon methods to ensure fairness or safety, or protect the fundamental rights of citizens. Our AIA report provides a practical framework, similar to an environmental impact assessment, for agencies to bring oversight to automated decision systems.
AI Now 2017 Report
Building on the inaugural 2016 report, the AI Now 2017 Report addresses the most recent scholarly literature in order to raise critical social questions that will shape our present and near future. This report focuses on new developments in four areas: labor and automation, bias and inclusion, rights and liberties, and ethics and governance. We identify emerging challenges in each of these areas and make recommendations to ensure that the benefits of AI will be shared broadly, and that risks can be identified and mitigated.
AI Now 2016 Report
This report is a summary of the 2016 AI Now public symposium, hosted by the White House and New York University’s Information Law Institute. The symposium focused on the social and economic impacts of AI over the next ten years, offering space to discuss some of the hard questions we need to consider, and to gather insights from world-leading experts across diverse disciplines.