The AI Now Institute conducts independent, interdisciplinary, innovative research on the complex social implications of AI. Our research is organized around a set of four core themes: rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. Our work is done in partnership with the emerging community of scholars, policymakers, advocates, industry practitioners, and many others who are focused on these topics.
Rights & Liberties
Artificial intelligence systems are being rapidly deployed in high-stakes domains, from healthcare to education to policing and criminal justice, challenging civil rights and liberties and the existing practices designed to protect them. These systems are often proprietary and opaque, even as they inform decisions that shape livelihoods and opportunity across populations. Without contextual knowledge, informed consent, and due process mechanisms, these systems can create risks that threaten already vulnerable populations.
AI Now will assess the fairness of AI systems for diverse populations, and use these findings to inform best practices for AI development, help ensure accountability once AI technologies are deployed, and support advocacy, public discourse, and policymaking.
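One common starting point for this kind of assessment is to compare a system's outcomes across demographic groups. The sketch below is a minimal, hypothetical illustration of a disparate-impact check based on the "four-fifths rule" from US employment law; the data, group labels, and threshold here are illustrative assumptions, not AI Now's methodology.

```python
from collections import Counter

def selection_rates(decisions, groups):
    """Compute the fraction of favorable decisions per demographic group."""
    totals, positives = Counter(), Counter()
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision  # decision is 1 (favorable) or 0 (unfavorable)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are a conventional red flag (the "four-fifths rule")."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit data: 1 = favorable decision, 0 = unfavorable.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio, rates = disparate_impact_ratio(decisions, groups)
print(f"Selection rates: {rates}")             # {'A': 0.6, 'B': 0.4}
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.67 -> below the 0.8 threshold
```

A single ratio like this is only a screening heuristic; a full fairness assessment would also examine error rates, base rates, and the context in which decisions are made.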
AI Now is partnering with the ACLU and other stakeholders to better understand and address these issues. We are committed to collaborating with advocates and front-line communities to ensure that our research is sensitive to impacts on the ground, answers the questions that are most pressing, and reflects the experiences and concerns of the most vulnerable.
Labor & Automation
Labor is a primary mechanism within modern economies for generating value and providing people with the basic securities of life: food, shelter, and meaning. As AI-driven automation increases, it has the potential to improve efficiency and to minimize repetitive human drudgery. But if it is not implemented with a view to its wider social implications, it could fundamentally destabilize existing social structures.
In an address at our AI Now Experts Workshop in 2016, Jason Furman, Chairman of President Obama's Council of Economic Advisers, noted that 83% of low-wage jobs in the US (those paying less than $20 per hour) are vulnerable to automation. For middle-income work, that number is still as high as 31%. This represents a seismic shift in the history of labor in the US, and the international ramifications will be complex and uneven.
Such serious scenarios deserve sustained empirical attention, but it is equally important to understand how AI and related algorithmic systems are already changing the balance of workplace power. Machine learning techniques are quickly being integrated into management and hiring decisions across many industries. These systems promise flexibility and efficiency, but they also intensify the surveillance of workers, who often do not know when and how they are being tracked and evaluated, or why they are hired or fired. Furthermore, AI-assisted forms of management may replace more democratic forms of bargaining between workers and employers, increasing owner power under the guise of technical neutrality.
As such, a key focus for the AI Now Institute is to study the impacts of AI across labor sectors and to research long-term approaches that can mitigate negative consequences, especially for vulnerable and marginalized populations. The data and findings generated will be used to support models for effective and socially sustainable governance and policy.
Bias & Inclusion
At their best, AI and algorithmic decision-support systems can be used to augment human judgement and reduce both conscious and unconscious biases. However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural prejudices and inequalities.
We already have evidence of these problems, from voice recognition that doesn’t “hear” women, to Siri failing to provide directions to women’s health services, to language models that encode stereotypical associations such as ‘woman’ with ‘receptionist’, to Google’s automated photo tagging system labeling African Americans as gorillas. When machine learning is built into complex social systems such as criminal justice, health diagnoses, academic admissions, and hiring and promotion, it may reinforce existing inequalities, regardless of the intentions of the technical developers.
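The occupational-stereotype finding above comes from research on trained word embeddings (notably Bolukbasi et al.'s work on word2vec). As a rough illustration of how such associations are measured, the sketch below computes cosine similarity against a gender direction; every vector here is a small made-up assumption for the example, whereas a real audit would load trained embeddings with hundreds of dimensions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical 4-dimensional embeddings; real audits use trained
# vectors (e.g., word2vec or GloVe), not hand-written ones.
vectors = {
    "man":          np.array([ 0.9, 0.1, 0.3, 0.2]),
    "woman":        np.array([-0.9, 0.1, 0.3, 0.2]),
    "programmer":   np.array([ 0.7, 0.5, 0.2, 0.1]),
    "receptionist": np.array([-0.7, 0.5, 0.2, 0.1]),
}

# A gender direction, following the approach of Bolukbasi et al.
gender_axis = vectors["man"] - vectors["woman"]

for word in ("programmer", "receptionist"):
    score = cosine(vectors[word], gender_axis)
    print(f"{word}: gender association {score:+.2f}")
# In these toy vectors, 'programmer' skews toward 'man' (+) and
# 'receptionist' toward 'woman' (-), mirroring the documented bias.
```

When embeddings like these feed downstream systems such as résumé screening or search ranking, the skew in the numbers becomes a skew in outcomes, which is why measurement of this kind matters.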
The AI Now Institute will research methods to better understand and mitigate bias in AI systems, and will assess the state of diversity and inclusion within the AI industry itself.
Safety & Critical Infrastructure
Artificial intelligence is already tasked with decision-making across many critical infrastructures, with more to come. From energy grids to hospitals to financial services, early-stage AI is being deployed with the goal of achieving major efficiencies and process improvements. However, integrating new technologies into existing complex systems is a delicate and difficult task. Technical systems fail, and human error can magnify those failures. These safety questions are particularly significant when lives are at risk.
Problems with data inputs, inaccuracies, and faulty reporting can ultimately lead to strained systems and catastrophic outcomes. Without proper planning, assessment, and integration strategies, unintended errors become more likely in the critical systems we most need to trust.
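One widely used defensive pattern is to validate inputs before an automated system acts on them, escalating to human review when the data looks implausible or stale. The sketch below illustrates this for hypothetical sensor readings; the field names, bounds, and thresholds are invented for illustration and are not drawn from any particular deployed system.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float        # e.g., grid load in megawatts
    age_seconds: float  # time since the reading was taken

# Hypothetical plausibility bounds for this example.
VALUE_RANGE = (0.0, 5000.0)  # readings outside this range are implausible
MAX_AGE = 60.0               # readings older than a minute are stale

def validate(reading: Reading) -> list[str]:
    """Return a list of problems; an empty list means the reading is usable."""
    problems = []
    low, high = VALUE_RANGE
    if not (low <= reading.value <= high):
        problems.append(f"value {reading.value} outside plausible range")
    if reading.age_seconds > MAX_AGE:
        problems.append(f"reading is {reading.age_seconds:.0f}s old (stale)")
    return problems

def act_on(reading: Reading) -> str:
    """Route bad inputs to human review instead of automated control."""
    problems = validate(reading)
    if problems:
        return f"ESCALATE to operator: {'; '.join(problems)}"
    return "proceed with automated control"

print(act_on(Reading("grid-07", value=4200.0, age_seconds=5.0)))
print(act_on(Reading("grid-07", value=-12.0, age_seconds=300.0)))
```

Checks like these do not make a system safe on their own, but they keep single bad inputs from propagating silently into automated decisions, which is where many cascading failures begin.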
AI Now will examine how AI and related technologies are being applied within these domains, and will work to understand the possibilities for safe and responsible AI integration. By mapping these applications, we can better understand mitigation strategies for a range of potential safety and security challenges.