Outlining a New AI Research Agenda

The recent walkout protesting Google’s history of harassment and discrimination was a landmark moment for tech and labor organizing.¹ Collective action by 20,000 Google workers in 50 cities around the world pushed the company to end the forced arbitration of sexual harassment cases. While this is a remarkable success, the majority of the workers’ demands have not yet been met.

Much of the media coverage of the walkout has focused on sexual harassment of women. But the issues go further. The Google workers argued that sexual harassment and abuse in the workplace are not only about gender and sexuality, but are interconnected with other forms of racialized discrimination that point to entrenched abuses of power.² In targeting the structural practices that serve to maintain inequality, these workers are echoing the demands of social justice movements throughout history: to call out the abuse of power, and to unite the many people who are exploited and discriminated against, be they contract workers, junior employees, or the many others who experience marginalization, in demanding an end to these practices.

The walkout builds on a much longer legacy of tech organizing and resistance: in it, there are echoes of the work of Computer Professionals for Social Responsibility, an alliance of computer scientists that, among other things, organized in opposition to the use of artificial intelligence in the Reagan-era Star Wars system. The organizers’ demands are also reminiscent of those of a collective of female MIT grad students in the Computer Science and Artificial Intelligence Labs in the 1980s, who thoroughly documented their experiences of discrimination in the report “Barriers to Equality in Academia: Women in Computer Science at MIT”.

Over the past year, the AI Now Institute has been examining many of these political and historical intersections through our multi-year research program focused on gender, race, and power in AI. We will shortly publish a report and an academic paper with the first phase of research findings. In light of recent events, we wanted to provide a preview of some of our work as a contribution to the emerging movement and the discussion around it.

Discriminatory practices can be reflected and amplified in AI systems.

AI systems — which Google and others are rapidly developing and deploying in sensitive social and political domains — can mirror, amplify, and obscure the very issues of inequality and discrimination that Google workers are protesting against. Over the past year, researchers and journalists have highlighted numerous examples where AI systems exhibited biases, including on the basis of race, class, gender, and sexuality.

We saw a dramatic example of these problems in recent news of Amazon’s automated hiring tool. In order to “learn” to differentiate between “good” and “bad” job candidates, the tool was trained on a massive corpus of data documenting the company’s past hiring decisions. The result was, perhaps unsurprisingly, a hiring tool that discriminated against women, even demoting CVs that contained the word ‘women’s’. Amazon engineers tried to fix the problem, adjusting the algorithm in an attempt to mitigate its biased preferences, but ultimately scrapped the project, concluding that it was unsalvageable.³

As many observed, this illustrates the intertwined nature of these issues: the ranking system reflected longstanding discriminatory hiring patterns at Amazon, patterns that would then be amplified by the automated systems. As Wendy Chun observed during her visiting professorship at AI Now, these tools are unintentionally performing a diagnostic service: they show how skewed and lopsided the existing pool of candidates and employees in the tech sector has become.

This example also demonstrates in a particularly visible manner how an exclusionary culture — one that allocates power to some and keeps others out — is mirrored by the technologies that are developed within that culture. This is an issue with far-ranging ramifications, as AI tools are increasingly integrated into social domains like employment, child welfare, and healthcare systems.
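To make the mechanism concrete, here is a deliberately minimal sketch of how a text classifier trained on historically skewed hiring labels can end up penalizing a gendered term. The data, scale, and library choices are invented for illustration; the details of Amazon’s actual system have not been made public (see note 3).

```python
# Hypothetical illustration only: a resume classifier trained on biased
# historical hire/no-hire labels. Not a reconstruction of Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past resumes and the skewed decisions made on them.
# Resumes mentioning women's activities were historically rejected more
# often, so that skew is encoded directly in the labels.
resumes = [
    "captain of chess club, software internship",
    "women's chess club captain, software internship",
    "open source contributor, hackathon winner",
    "women's coding society lead, hackathon winner",
    "teaching assistant, systems project",
    "women's engineering society, systems project",
]
hired = [1, 0, 1, 0, 1, 0]  # historical outcomes, not merit

# Standard bag-of-words features plus logistic regression.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The model has "learned" the historical skew: the coefficient attached to
# the token "women" is negative, so its presence lowers a candidate's score.
women_idx = vectorizer.vocabulary_["women"]
print("weight for token 'women':", model.coef_[0][women_idx])
```

Nothing in this pipeline is exotic; the discriminatory behavior comes entirely from the historical labels the model is asked to reproduce, which is why post hoc adjustments to the algorithm alone are so hard to make stick.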

We don’t know enough about diversity in the field of AI — but what we do know is troubling.

The Google walkout points to a problem across tech culture as a whole: the field can be particularly hostile to women, trans and non-binary people, and people of color.

In terms of the AI field in particular, we lack data and robust research. The only recent research on gender diversity in AI comes from a limited survey by WIRED Magazine and Element AI, which found that just 12% of the authors of papers at the top three machine learning conferences were women (implying the field is less gender diverse than computer science overall, or than the tech industry as a whole). We have no data illustrating the state of racial diversity, or its intersection with gender, in the field.

Especially if, as the Amazon example suggests, there is a link between exclusionary tech cultures and discriminatory products, these indicators give serious cause for concern. Our review of the scholarly literature in this space demonstrates that while the AI field has accelerated, and with it funding for technical AI research, issues of culture and power structures within the AI industry have not been closely examined.

New directions for research

The few existing studies on bias in AI often neglect to examine gender’s relationship to other forms of identity, such as race, class, sexuality, and ability, or the ways in which forms of discrimination intersect and serve to shore up existing power structures. They also overwhelmingly treat gender as a binary variable, thus excluding trans and non-binary people’s experiences.

Addressing the demands raised by tech workers requires moving beyond what’s commonly framed as the ‘pipeline problem’: the idea that these problems will be fixed by increasing the diversity of the talent pool tech companies can draw from. But, as Sara Ahmed writes in her 2012 book On Being Included: Racism and Diversity in Institutional Life, diversity campaigns can themselves serve as a means of avoiding confrontation with sexism or racism — they have a marketing value, offered as an optimistic narrative of institutional improvement that often distracts from the need to address deeper and much more difficult structural issues.

Indeed, as one speaker at the walkout noted, a pipeline to a toxic environment is not equity. Meaningfully addressing these issues will require tackling a broad range of structural problems, from sexism, racism, and discrimination to pay equity, forced arbitration, and representation, in ways that confront the fundamental issue of access to power.

It’s time for research on gender and race in AI to move beyond considering whether AI systems meet narrow technical definitions of ‘fairness.’ We need to ask deeper, more complex questions: Who is in the room when these technologies are created, and which assumptions and worldviews are embedded in this process? How does our identity shape our experiences of AI systems? In what ways do these systems formalize, classify, and amplify rigid and problematic definitions of gender and race? We share some examples of important studies that tackle these questions below — and we have new research publications coming out to contribute to this literature.
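For readers unfamiliar with what such narrow definitions look like in practice, the sketch below computes one widely used example: the demographic parity gap, i.e. the difference in positive-outcome rates between two groups. The predictions and group labels are invented; the point is only to show how much these single-number criteria leave out relative to the questions above, not to endorse any particular metric.

```python
# Illustrative sketch of a "narrow" technical fairness criterion:
# demographic parity difference, computed on invented example data.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = predictions[group == "a"].mean()  # positive rate for group a
rate_b = predictions[group == "b"].mean()  # positive rate for group b
print("demographic parity difference:", abs(rate_a - rate_b))

# A small gap here says nothing about who defined the groups, what the
# system is used for, or how its errors are experienced by the people
# classified -- the questions the paragraph above argues research must ask.
```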

AI systems are playing a growing role in shaping our lives and our access to power and resources. It’s critical that we gain a clearer view into how these systems are constructed, and how they are experienced differently by members of society. There is a growing movement among tech workers to take action against inequality and abuse of power: more research into the social implications of the field can play a valuable role in extending and informing these efforts.

We look forward to sharing our own findings in the coming months, and in the meantime, we thought we’d share some of the things we’ve been reading of late:


  1. AI Now co-founder Meredith Whittaker was one of the organizers of the walkout.
  2. https://medium.com/@GoogleWalkout/googlewalkout-update-collective-action-works-but-we-need-to-keep-working-b17f673ad513
  3. We know little about how this system worked, the measures the company used to try to fix it, or how this impacted the company’s hiring decisions during the time recruiters did look at the recommendations it generated — this opacity is itself part of the problem, as it inhibits independent research.
