The AI Now Institute conducts independent, interdisciplinary, innovative research on the complex social implications of AI. Our research is organized around a set of four core themes: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. Our work is done in partnership with the emerging community of scholars, policymakers, advocates, industry practitioners, and many others who are focused on these topics.

Recent Publications

West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute.

Richardson, R., Schultz, J. and Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review Online.

Suzor, N., West, S.M., York, J.C. and Quodling, A. (2019). What Do We Mean When We Talk About Transparency? Toward Meaningful Transparency in Commercial Content Moderation. International Journal of Communication, 13: 1526–1543.

Crawford, K. and Campolo, A. (2019). Enchanted Determinism: Power without Control in Artificial Intelligence. (Under review for a forthcoming special issue of Engaging Science, Technology, and Society).

Whittaker, M., Crawford, K., Dobbe, R., Fried, G., Kaziunas, E., Mathur, V., West, S.M., Richardson, R., Schultz, J. and Schwartz, O. (2018). AI Now 2018 Report. AI Now Institute.

AI Now Institute. (2018). Algorithmic Accountability Policy Toolkit.

AI Now Institute. (2018). Litigating Algorithms: Challenging Government Use of Algorithmic Decision Systems.

Crawford, K. and Joler, V. (2018). Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources. AI Now Institute and Share Lab.

Gebru, T., Morgenstern, J., Vecchione, B., Wortman Vaughan, J., Wallach, H., Daumé III, H. and Crawford, K. (2018). Datasheets for Datasets. Proceedings of the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning: Stockholm, Sweden.

West, S.M. (2018). Censored, Suspended, Shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11): 4366–4383.

West, S.M. (2018). Cryptographic Imaginaries and the Networked Public. Internet Policy Review, 7(2).

Reisman, D., Schultz, J., Crawford, K. and Whittaker, M. (2018). Algorithmic Impact Assessments: A Practical Framework For Public Agency Accountability. AI Now Institute.

Suzor, N., Van Geelen, T. and West, S.M. (2018). Evaluating the Legitimacy of Platform Governance: A Review of Research and A Shared Research Agenda. International Communication Gazette, 80(4): 385–400.

Barocas, S., Crawford, K., Shapiro, A. and Wallach, H. (2017). The Problem With Bias: Allocative Versus Representational Harms in Machine Learning. 9th Annual Conference of the Special Interest Group for Computing, Information and Society: Philadelphia, PA.

Campolo, A., Sanfilippo, M., Whittaker, M. and Crawford, K. (2017). AI Now 2017 Report. AI Now Institute.

Crawford, K., Whittaker, M., Elish, M.C., Barocas, S., Plasek, A. and Ferryman, K. (2016). AI Now 2016 Report. AI Now Institute.