Illustration by Somnath Bhatt

The origins, implications, and translations of justice in Arabic

A guest post by Lujain Ibrahim. Lujain is a Schwarzman Scholar studying for a Master’s degree in Global Affairs with concentrations in technology, policy, and education at Tsinghua University in Beijing, China. Twitter: @lujainmibrahim

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

Over 300 million people speak Arabic, making it the fifth most spoken language in the world. Yet language is in no way a static or neutral medium of communication. As it shapes discourse, hype, and criticism, it remains a crucial arbiter of, and proxy for, social and technical change. Public attitudes towards political, social, legal, and cultural issues, including the development, deployment, and regulation of emerging technologies, circulate and change through language. Because language is a fundamental aspect of cultural and political identity, its origins, implications, and translations are crucial to debates around AI fairness and justice.

Against this backdrop, I take a step back to before fairness and justice were articulated, in AI and elsewhere, through predominantly Western philosophy and jurisprudence, and introduce the Arabic etymological and lexical definitions of “justice” and of other terms in its purview. I explore how these definitions challenge, expand on, and align with current algorithmic reasoning and interpretations of fairness and justice in critical AI discourse. While mainstream arguments around algorithmic fairness/justice tend to treat it as a measurable afterthought, I argue that the terminology for justice in Arabic implores us to focus on a more holistic interpretation that incorporates considerations of power, context, and truth.

Academics, technologists, the media, and the general public have all expressed concerns about AI systems’ differential treatment of marginalized groups. Critical AI scholars often present “algorithmic fairness” as the solution to these concerns, largely defining fairness through non-discrimination criteria such as calibration and predictive parity. However, the question of how to prevent this kind of differential treatment has given rise to another, more fundamental one: what does it mean for AI systems to be fair or just, particularly in an unfair and unjust world?
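To make one of the criteria named above concrete, here is a minimal sketch of predictive parity, the requirement that a classifier’s precision (the rate at which its positive predictions are correct) be equal across demographic groups. The function name and toy data are my own illustrative inventions, not from the essay:

```python
# Sketch of the "predictive parity" non-discrimination criterion:
# compare precision, P[y = 1 | y_hat = 1], between two groups.

def predictive_parity_gap(y_true, y_pred, groups):
    """Absolute difference in precision between groups "A" and "B"."""
    def precision(g):
        # Keep only this group's positive predictions.
        pred_pos = [(t, p) for t, p, grp in zip(y_true, y_pred, groups)
                    if grp == g and p == 1]
        if not pred_pos:
            return 0.0
        # Fraction of positive predictions that were correct.
        return sum(t for t, _ in pred_pos) / len(pred_pos)
    return abs(precision("A") - precision("B"))

# Toy data: labels, predictions, and group membership for 8 individuals.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(round(predictive_parity_gap(y_true, y_pred, groups), 2))  # prints 0.17
```

On the essay’s own argument, a small gap on a metric like this is at best necessary, never sufficient: it says nothing about who defined the objective, the context the system is placed in, or whether the system should exist at all.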


There is no straightforward answer to this question, but to begin working through one, we can look at the negative definition of justice in Arabic; in other words, examine what conditions must be met for a just outcome or process to be achieved. The negative definition of justice in Arabic expands on its English counterpart in a critical way. Unlike in English, the opposite of adalah (justice) in Arabic is not simply thulm (injustice), but rather both thulm (injustice) and jawr (oppression). Thus, under the Arabic definition, for justice to be achieved, both injustice and oppression must be absent. While injustice (in English) is defined as the “absence of justice,” oppression is defined as an “unjust or cruel exercise of authority or power.” The inclusion of oppression dictates a much-needed emphasis on the uneven distribution of power when we conceptualize AI fairness and justice. This pushes us to spotlight questions of who gets to define and influence the goal, the objective function, and the decision space of a system, and of how power and control are assigned in that process.

Over the years, several scholars and activists have pointed out the need for more expansive definitions of fairness — ones that go beyond the simple absence of discrimination within algorithms and incorporate political economy perspectives and/or an analysis of power. While a power analysis is not always included in the multiplicity of approaches to algorithmic fairness/justice, it is embedded in the (negative) definition of justice in Arabic: justice may only be achieved in the absence of both injustice and oppression.

Justice as a verb

Other Arabic verbs for, and translations of, justice are also useful to this inquiry. For example, the term adel (to be just) in Arabic means “to resemble” or “to be equal to,” whereas the term addala means “to make right” or “to rectify something.” Thus, while the act of being “just” incorporates equality and parity, it also alludes to a corrective element. And one cannot “correct” or “rectify” an unfair or unjust situation while disregarding context. These definitions underscore how important it is to shift the focus in algorithmic fairness/justice from the technical characteristics of AI systems to their greater sociotechnical embeddings. Critical AI scholars have argued that this shift to a sociotechnical frame requires a careful examination of social context, and a reevaluation of the roles algorithms can and should play in effecting positive social change.

After all, algorithmic systems cannot be separated from the complex and flawed settings they are situated within. In fact, algorithmically “fair” systems often still fail to address social injustice, and either leave the unfair status quo unchallenged or, even worse, legitimize social injustice. Indeed, the term thulm (injustice) in Arabic is defined as “placing a thing or an object in its wrong place,” where “misplacing” an “object” also includes placing it in the wrong quantity (too much or too little) and/or at the wrong time (too early or too late). This definition further reinforces the importance of context: “fair” systems that are “misplaced” may still lead to unfair and unjust outcomes.

To be just

Finally, I present two common Arabic and specifically Islamic uses of the verb “to be just.” The first is using it to mean “abandoning an action” or “backtracking from a decision” (i.e., thinking deeply about an action until you reach the point of deciding not to commit it). The second is the Quranic usage meaning to speak truth: “And when you speak, be just” (Surat Al-An’am, verse 152). In algorithmic justice, I believe these two examples implore us to consider resisting technological solutionism — the idea that there is a technological fix to all of society’s problems — as a necessary ingredient of, and prerequisite for, algorithmic justice. In other words, to be just, we must meticulously and honestly inspect the problems we set out to solve with algorithmic interventions and ponder whether the solutions should be algorithmic interventions at all. These steps precede the processes of designing and building systems, be they fair or unfair, yet are crucial in ensuring that algorithmic systems are (a) not used in place of essential social and political change, (b) not built for inherently oppressive applications like occupation and war, and that (c) the deployment of these oppressive algorithms is not legitimized by altruistically presenting them as “fair” or “just” using a rather narrow conceptualization of these terms.

The many facets of the term “justice” in the Arabic language expand on the current conceptions of AI justice in various ways, only some of which are outlined in this piece. Other lexical explorations, like Yung Au’s essay analyzing the complexity of Cantonese terms used to describe AI and automation, further exemplify the complexity and elasticity that makes language so pivotal. Examining the nuances of justice and fairness linguistically is an insightful exercise not just within each language but also via linguistic and contextual comparison, as it can be an important vessel to interrogate widely accepted ideas and shape algorithmic imaginaries in a world where technology is rarely locally/regionally confined. For instance, it may elucidate how nuances in definitions translate in multilingual legal and technical AI efforts — e.g., the different fairness notions in the many EU language translations of the GDPR.

As the technology ecosystem blossoms in the Arabic-speaking world, I hope that these lexical explorations, which encode concepts of cultural and historical significance, underpin efforts in this space. And that the emergent discourse on algorithmic justice is one that is attentive to power, context, and truth — moving us to a clearer and more effective remedy to algorithmic injustice, and ultimately, perhaps even social injustice.