The AI Now Institute aims to produce interdisciplinary research and public engagement to help ensure that AI systems are accountable to the communities and contexts in which they’re applied.
Our mission is to produce rigorous, interdisciplinary, and strategic research to inform public discourse around the social implications of AI.
Research
More Research »

- [Water Justice and Technology Report](https://ainowinstitute.org/water-justice-technology.html)
  A new report by the AI Now Institute's Theodora Dryer and the Center for Interdisciplinary Environmental Justice (CIEJ) explores how governments have used recent crises to pass a wave of water “relief” policies that not only expand the footprint of technology in the water domain, but also exacerbate water commodification, environmental racism, and economic extraction.
- Algorithmic Accountability for the Public Sector Report
  by the Ada Lovelace Institute, AI Now Institute, and Open Government Partnership
- Counterpoints: A San Francisco Bay Area Atlas of Displacement & Resistance
  by the Anti-Eviction Mapping Project
- Suspect Development Systems: Databasing Marginality and Enforcing Discipline
  by Rashida Richardson & Amba Kak
Policy
More Policy »

- Suspect Development Systems: Databasing Marginality and Enforcing Discipline
  by Rashida Richardson & Amba Kak
- [Water Justice and Technology Report](https://ainowinstitute.org/water-justice-technology.html)
  by the AI Now Institute's Theodora Dryer and the Center for Interdisciplinary Environmental Justice (CIEJ)
  The Covid-19 crisis, computational resource control, and water relief policy
- Algorithmic Accountability for the Public Sector Report
  by the Ada Lovelace Institute, AI Now Institute, and Open Government Partnership
  Learning from the first wave of policy implementation
- China in Global Tech Discourse
  by Meredith Whittaker, Shazeda Ahmed, and Amba Kak
  We're launching an essay series exploring the myths, realities, actors, and incentives underpinning dominant China tech and AI narratives.
News
More News »

- Erin McElroy, Meredith Whittaker & Nicole E. Weber on how the intrusion of surveillance technologies into the home by landlords, bosses, and schools extends the carceral state into domestic space.
- Everyone should decide how their digital data are used — not just tech companies
  Smartphones, sensors, and consumer habits reveal much about society. Too few people have a say in how these data are created and used.
- The False Comfort of Human Oversight as an Antidote to A.I. Harm
  Amba Kak and Ben Green challenge the global convergence toward policies requiring human oversight of AI.
- “This is about actively creating a technology that can be put to harmful uses rather than identifying and mitigating vulnerabilities in existing technology,” Sarah Myers West, a researcher for the AI Now Institute, told Motherboard. “Researchers aren’t always going to be well-placed to make these assessments on their own. That’s...
Events
More Events »

- Ethical, Legal & Social Implications of Machine Learning in Genomics | National Human Genome Research Institute | Varoon Mathur
- Whistleblower Aid + AI Now: How Tech Workers Can Blow the Whistle Workshop
- 2019 Symposium
- AI Now Thursdays: Ryan Moritz: Important Bird Opera