AI Now 2019 Report
From tenant rights groups opposing facial recognition in housing to Latinx activists and students protesting lucrative tech company contracts with military and border agencies, this year we saw community groups, researchers, and workers demand a halt to risky and harmful AI technologies. AI Now’s 2019 report spotlights these growing movements, examining the coalitions involved and the research, arguments, and tactics used. We also examine the specific harms these coalitions are resisting, and offer 12 recommendations on what policymakers, advocates, and researchers can do to address uses of AI that widen inequality.
Confronting Black Boxes: A Shadow Report of the New York City Automated Decision System Task Force
In 2017, New York City became the first US jurisdiction to create a task force charged with developing recommendations for government use of automated decision systems (ADS). Possessing one of the largest municipal budgets and some of the largest municipal agencies in the world, New York City was thought to be an ideal laboratory for evaluating the actual risks, opportunities, and obstacles involved in government use of ADS, as well as the feasibility of interventions and solutions primarily explored in academic research. Confronting Black Boxes is a community-powered shadow report that provides a comprehensive record of what happened during the Task Force’s review process and offers other municipalities and governments robust recommendations based on collective experience and current research on government use of ADS.
Disability, Bias, and AI
AI systems are being rapidly integrated into core social domains, informing decisions about who gets resources and opportunity, and who doesn’t. These systems, often marketed as smarter, better, and more objective, have been shown repeatedly to produce biased and erroneous outputs. And while much AI bias research and reporting has focused on race and gender, far less attention has been paid to AI bias and disability. Our report Disability, Bias, and AI draws on a wealth of research by disability advocates and scholars. In it we examine what disability studies and activism can tell us about the risks and possibilities of AI, and how attention to disability complicates our current approach to “debiasing” AI.
Litigating Algorithms 2019 US Report: New Challenges to Government Use of Algorithmic Decision Systems
Algorithmic decision systems (ADS) are often sold as offering a number of benefits, from mitigating human bias and error to cutting costs and increasing efficiency, accuracy, and reliability. Yet proof of these advantages is rarely offered, even as evidence of harm increases. In health care, criminal justice, education, employment, and other areas, the implementation of these technologies has caused numerous problems with profound effects on millions of people’s lives. Litigation has become a valuable tool both for understanding the concrete, real-world impacts of flawed ADS and for holding government and ADS vendors accountable when these systems harm us. Following up on our 2018 report, Litigating Algorithms 2019 U.S. Report: New Challenges to Government Use of Algorithmic Decision Systems examines recent U.S. lawsuits brought against government use of ADS, and how fighting these systems in the courts has helped mitigate some of the harms they cause.
Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice
Law enforcement agencies are increasingly using predictive policing systems to forecast criminal activity and allocate police resources. Yet in numerous jurisdictions, these systems are built on data produced during documented periods of flawed, racially biased, and sometimes unlawful practices and policies (“dirty policing”). These policing practices and policies shape the environment and the methodology by which data is created, which raises the risk of creating inaccurate, skewed, or systemically biased data (“dirty data”). If predictive policing systems are informed by such data, they cannot escape the legacies of the unlawful or biased policing practices that they are built on. Nor do current claims by predictive policing vendors provide sufficient assurances that their systems adequately mitigate or segregate this data.
Discriminating Systems: Gender, Race, and Power in AI
The diversity problems of the AI industry and the issues of bias in AI systems tend to be considered separately. In this report we suggest that they are two sides of the same problem: issues of discrimination in the workforce and in system building are deeply related. Moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity, and vice versa. Our research points to new ways of understanding the relationships between these complex problems, which can open up new pathways to redressing the current imbalances and harms.
Drawing on a thorough review of existing literature and current research on issues of gender, race, class, and artificial intelligence, this pilot study examines the scale of AI’s current diversity crisis and possible paths forward. It represents the first stage of a multi-year project examining the intersection of gender, race, and power in AI.
AI Now 2018 Report
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem, and provides 10 practical recommendations that can help create accountability frameworks capable of governing these powerful technologies.
Algorithmic Accountability Policy Toolkit
AI Now developed a toolkit to help advocates uncover and understand where algorithms are being used in government, and to inform advocacy strategies and tactics. The toolkit includes a breakdown of key concepts and questions, an overview of existing research, summaries of algorithmic systems currently used in government, and guidance on advocacy strategies for identifying and interrogating the use of these systems.
Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems
In June 2018, AI Now teamed up with NYU Law’s Center on Race, Inequality, and the Law, and the Electronic Frontier Foundation to host a first-of-its-kind workshop critically examining litigation strategies for challenging government use of algorithmic decision systems across different disciplines and areas of law. The Litigating Algorithms report highlights the critical observations, findings, and areas for future discussion and collaboration that emerged from this dynamic workshop.
Anatomy of an AI System
How can we visualize and understand the true scale of AI systems? This large-scale map and long-form essay, produced in partnership with SHARE Lab, investigates the human labor, data, and planetary resources required to operate an Amazon Echo.
Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability
Our Algorithmic Impact Assessment report helps affected communities and stakeholders assess the use of AI and algorithmic decision-making in public agencies and determine where, or whether, such use is acceptable. Algorithms in government already shape decisions that affect people’s lives, yet there are no agreed-upon methods to ensure fairness or safety, or to protect the fundamental rights of citizens. The report provides a practical framework, similar to an environmental impact assessment, for agencies to bring oversight to automated decision systems.
AI Now 2017 Report
Building on the inaugural 2016 report, the AI Now 2017 Report addresses the most recent scholarly literature in order to raise critical social questions that will shape our present and near future. This report focuses on new developments in four areas: labor and automation, bias and inclusion, rights and liberties, and ethics and governance. We identify emerging challenges in each of these areas and make recommendations to ensure that the benefits of AI will be shared broadly, and that risks can be identified and mitigated.
AI Now 2016 Report
This report is a summary of the 2016 AI Now public symposium, hosted by the White House and New York University’s Information Law Institute. The symposium focused on the social and economic impacts of AI over the next ten years, offering space to discuss some of the hard questions we need to consider, and to gather insights from world-leading experts across diverse disciplines.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.