The AI Now Institute conducts independent, interdisciplinary, innovative research on the complex social implications of AI. Our research is organized around a set of four core themes: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. Our work is done in partnership with the emerging community of scholars, policymakers, advocates, industry practitioners, and many others who are focused on these topics.
West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from: https://ainowinstitute.org/discriminatingsystems.html.
Richardson, R., Schultz, J. and Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review Online. Retrieved from: https://www.nyulawreview.org/online-features/dirty-data-bad-predictions-how-civil-rights-violations-impact-police-data-predictive-policing-systems-and-justice/.
Suzor, N., West, S.M., York, J.C. and Quodling, A. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 13: 1526–1543. Retrieved from: https://ijoc.org/index.php/ijoc/article/view/9736/2610.
Crawford, K. and Campolo, A. (2019). Enchanted Determinism: Power without Control in Artificial Intelligence. (Under review for a forthcoming special issue of Engaging Science, Technology, and Society).
Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S.M., Richardson, R., Schultz, J. and Schwartz, O. (2018). AI Now 2018 Report. AI Now Institute. Retrieved from: https://ainowinstitute.org/AI_Now_2018_Report.html.
AI Now Institute. (2018). Algorithmic Accountability Policy Toolkit. Retrieved from: https://ainowinstitute.org/aap-toolkit.html.
AI Now Institute. (2018). Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems. Retrieved from: https://ainowinstitute.org/litigatingalgorithms.html.
Crawford, K. and Joler, V. (2018). Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources. AI Now Institute and Share Lab. Retrieved from: https://anatomyof.ai.
Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H. and Crawford, K. (2018). Datasheets for Datasets. Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning: Stockholm, Sweden. Retrieved from: https://arxiv.org/abs/1803.09010.
West, S.M. (2018). Censored, Suspended, Shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11): 4366-4383. Retrieved from: https://journals.sagepub.com/doi/full/10.1177/1461444818773059.
West, S.M. (2018). Cryptographic Imaginaries and the Networked Public. Internet Policy Review, 7(2). Retrieved from: https://policyreview.info/articles/analysis/cryptographic-imaginaries-and-networked-public.
Reisman, D., Schultz, J., Crawford, K., Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability. AI Now Institute. Retrieved from: https://ainowinstitute.org/aiareport2018.html.
Suzor, N., Van Geelen, T., and West, S.M. (2018). Evaluating the Legitimacy of Platform Governance: A Review of Research and A Shared Research Agenda. International Communication Gazette, 80(4): 385-400.
Barocas, S., Crawford, K., Shapiro, A. and Wallach, H. (2017). The Problem With Bias: Allocative Versus Representational Harms in Machine Learning. 9th Annual Conference of the Special Interest Group for Computing, Information and Society: Philadelphia, PA. Retrieved from: http://meetings.sigcis.org/uploads/6/3/6/8/6368912/program.pdf.
Campolo, A., Sanfilippo, M., Whittaker, M. and Crawford, K. (2017). AI Now 2017 Report. AI Now Institute. Retrieved from: https://ainowinstitute.org/AI_Now_2017_Report.html.
Crawford, K., Whittaker, M., Elish, M.C., Barocas, S., Plasek, A. and Ferryman, K. (2016). AI Now 2016 Report. AI Now Institute. Retrieved from: https://ainowinstitute.org/AI_Now_2016_Report.html.
Rights & Liberties
Artificial intelligence systems are being rapidly deployed in high-stakes domains, from healthcare to education to policing and criminal justice, challenging civil rights and liberties and the existing practices designed to protect them. These systems are often proprietary and opaque, even as they inform decisions that shape livelihoods and opportunity across populations. Without contextual knowledge, informed consent, and due process mechanisms, these systems can create risks that threaten already vulnerable populations.
AI Now will assess the fairness of AI systems for diverse populations, and use these findings to inform AI development best practices, help ensure accountability following deployment of AI technologies, and support advocacy, public discourse, and policy making.
AI Now is partnering with the ACLU and other stakeholders to better understand and address these issues. We are committed to collaborating with advocates and front-line communities to ensure that our research is sensitive to impacts on the ground, answers the questions that are most pressing, and reflects the experiences and concerns of the most vulnerable.
Labor & Automation
Labor is a primary mechanism within modern economies to generate value and provide people with the basic securities of life: food, shelter, and meaning. As AI-driven automation increases, it has the potential to improve efficiency and to minimize repetitive human drudgery. But if it is not implemented with a view to its wider social implications, it risks fundamentally destabilizing existing social structures.
In an address at our AI Now Experts Workshop in 2016, President Obama’s Chief Economist, Jason Furman, noted that 83% of low-income jobs in the US are vulnerable to automation. For middle-income work, that number is still as high as 31%. This represents a seismic shift in the history of labor in the US, and the international ramifications will be complex and uneven.
Such serious scenarios deserve sustained empirical attention, but it is equally important to understand how AI and related algorithmic systems are already changing the balance of workplace power. Machine learning techniques are quickly being integrated into management and hiring decisions across many industries. New systems make promises of flexibility and efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired. Furthermore, AI-assisted forms of management may replace more democratic forms of bargaining between workers and employers, increasing owner power under the guise of technical neutrality.
As such, a key focus for the AI Now Institute is to study the impacts of AI across labor sectors and to research long-term approaches that can mitigate negative consequences, especially on vulnerable and marginalized populations. The data and findings that are generated will be used to support models for effective and socially sustainable governance and policy.
Bias & Inclusion
At their best, AI and algorithmic decision-support systems can be used to augment human judgment and reduce both conscious and unconscious biases. However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural prejudices and inequalities.
We already have evidence of these problems, from voice recognition that doesn’t “hear” women, to Siri giving inadequate instructions to women’s health services, to natural language models that use stereotypical associations like ‘woman’ with ‘receptionist’, to Google’s automated photo tagging system describing African Americans as gorillas. When machine learning is built into complex social systems such as criminal justice, health diagnoses, academic admissions, and hiring and promotion, it may reinforce existing inequalities, regardless of the intentions of the technical developers.
The AI Now Institute will research ways to better understand and mitigate the effects of bias in AI systems, and to assess the state of diversity and inclusion within the AI industry itself.
Safety & Critical Infrastructure
Artificial intelligence is already tasked with decision-making across many critical infrastructures, with more to come. From energy grids to hospitals to financial services, early-stage AI is being applied with the goal of advancing major efficiencies and process improvements. However, integrating new technologies into existing complex systems is a delicate and difficult task. Technical systems fail, and human error can magnify these failures. These safety questions are particularly significant when lives are at risk.
Problems with data inputs, inaccuracies, and faulty reporting can ultimately lead to strained systems and catastrophic outcomes. Without proper planning, assessment, and integration strategies, unintended errors become more likely in the critical systems that we need to trust the most.
AI Now will examine the way in which AI and related technologies are being applied within these domains, and work to understand possibilities for safe and responsible AI integration. With this understanding, we can better develop safety and security mitigation strategies for a range of potential challenges.