Illustrations by Somnath Bhatt

We need to generate narratives that not only offer perspectives from other places but also provide crucial anticipatory knowledge and strategy, helping to ensure that the spread of AI does not follow the path of social control and consolidation of decision-making power that has marked its proliferation in the West.


Critical AI thinking has moved beyond examining the specific features and biases of discrete AI models and technical components to recognize the importance of the racial, political, gendered and institutional legacies that shape real-world AI systems, as well as the material contexts and communities most vulnerable to the harms and failures of those systems. National and transnational political, economic and racial contexts of production and deployment are integral to the questions of how such AI will operate, and for whose benefit. However, much of this thinking currently originates in the global North and inadvertently takes the infrastructural and regulatory landscapes and histories of Euro-America as the baseline for critical AI thinking.

Simultaneously, even as critical AI discourse garners more mainstream attention, “AI” is largely used as a catchall term for a narrow set of technologies, such as machine learning and other algorithmic systems that produce predictions and determinations. This draws boundaries around “what is and is not ‘AI’” in a way that artificially narrows our attention, excluding many analog and digital forms of social and political classification that are already prevalent, and that constitute the foundations on which many so-called “AI systems” rely. These geographical, conceptual and imaginative limits produce their own silences and impossibilities in terms of what gets counted and is considered relevant and important to critical AI thinking.

There is an urgent need, then, to expand and revise critical AI thinking by attending to global racial histories, diverse queer movements, and the struggles of caste and tribal communities and their specific place-based demands, especially outside of the West. We need a more expansive stocktaking of race and gender relations, as well as of the variegated forms of marginalization globally. It is not sufficient to note that caste identities take on a racial function when they enter the database; we must also recognize that caste-based practices are highly variegated across regions in South Asia. Similarly, a nod to queer persons in critical AI thinking is not enough; we need deeper investigation and documentation of how technologies are getting, or will get, entangled with the specific histories of criminalization and marginalization of queer bodies across the world.

Further, the conception, funding and deployment of AI systems in the global South is far less uniform and totalizing, given that infrastructure projects and datafication schemes are constantly being made and unmade. In post-colonial societies, governance infrastructures and legal frameworks are also shaped by colonial legacies. The ensuing struggles over legislation and record-keeping practices, problematic efforts at digitalization, and complex dependencies on foreign enterprise have all resulted in unreliable, dynamic and highly contested practices of data governance.

For all these reasons, it cannot be assumed that terms like fairness, transparency and accountability carry the same meanings, or even meaningful import, in AI ethics and governance discussions in the global South.

The demands and concerns for progressive AI futures globally may not be adequately reflected in the keywords and concerns foregrounded by the current critical AI discourse.

  • How then, if at all, should keywords like Fairness, Transparency, Accountability, Bias, Auditing, Ghost Labor, Explainability, Social Good, etc., be redefined or even retooled to reflect the demands that non-Western communities make of technological futures? What terms or formulations are missing from the discourse?
  • How do we bring local political, administrative, historical, ecological and material contexts and practices across the world to bear upon the current understanding of the values and ideals mentioned above, and perturb the “common-sense” solutions proposed to achieve them? What new vocabulary might we need to describe those demands?
  • Should fairness, accountability and explainability signify other or more demands when anchored to specific global South communities and contexts? What are we seeking to ensure is “fair”, and about what do we need to be “transparent”?

Call for Contributors

Taking inspiration from Raymond Williams’ original Keywords project and, more recently, from media scholar Maya Ganesh’s work (A is for Another: A Dictionary of AI), we invite short essays or posts (1000–1500 words) that take up a dominant concept, metaphor or keyword along which critical AI thinking is currently undertaken. We invite reflections and responses to research themes and keywords foregrounded in critical AI conferences such as FAccT, as well as writing that offers new terms or formulations absent from the discourse. We ask authors to respond to, challenge, situate or re-frame the concept or metaphor they choose by drawing on archival, ethnographic, journalistic, or critical quantitative research focusing on the global South or transnational contexts.

We encourage academics, activists, journalists and others, especially early-career or junior colleagues researching global AI histories, materialities and futures, to apply with an outline of two ideas/pitches for the essays they would like to contribute. While submissions are not restricted to global South topics or geographies, entries that foreground minoritized and underrepresented contexts and communities within the global South and North will be given preference. Comparative and multi-vocal essays, interviews with invisibilized stakeholders of the global AI economy, and experimental formats are also welcome.

We are only able to offer financial support to a limited number of contributors, with preference given to junior scholars and contributors from minoritized communities. All contributors will be provided feedback and editorial support prior to publication. We also hope to publish additional framing essays from leading thinkers in the space to provide inspiration and direction to this effort.

In order to contribute to the New AI Lexicon project, please fill out this form and attach a document containing two (or more) 300-word pitches for the essays you would like to contribute. We are accepting contributions on a rolling basis between January and March, and aim to publish before June 2021. Please mention whether the research is already complete or still in progress, and what kind of resources you might need in order to finish the piece. Please provide your name, affiliation (if any) and links to your previous writing (if any).

Authored by Noopur Raval and Amba Kak, with assistance from Alejandro Calcaño.

Original illustration by Somnath Bhatt. Somnath is an artist and a designer who lives and works between the USA and India. Seeking the new in the old, and the old in the new is his favorite form of making.