Illustration by Somnath Bhatt

Critically Embracing Human Rights Frameworks in AI

A guest post by Catriona Gray. Catriona is a PhD candidate at the University of Bath’s Centre for Doctoral Training in Accountable, Responsible and Transparent Artificial Intelligence. Her research traces the adoption of decision-supporting AI technologies within the global refugee regime. She is on Twitter @CatrionaLGray.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.


From the OECD to Google, AI ethics frameworks are everywhere. Yet, for their critics, these non-binding initiatives often fall short as actionable tools and can in some cases amount to ‘ethics-washing’ (Wagner, 2018). In response, several policy actors have attempted to construct human rights-based frameworks. For example, the Toronto Declaration, drafted by a group of civil society and academic experts in 2018, aimed to apply human rights law and standards to machine learning, with an emphasis on equality and non-discrimination. But these alternatives carry risks of their own, and may even inherit some of the same problems associated with existing normative frameworks. Critical perspectives on human rights, including decolonial critiques, have so far been largely absent from these debates. In light of this, I argue for a more provisional, yet expansive, embrace of human rights in AI discourse.

It is not difficult to see the appeal of international human rights law (IHRL) as a legal and political response to the problems that AI and related technologies present. As Pizzi et al. (2020) argue, applying human rights standards to AI in fields such as humanitarian action could help construct clearer lines of accountability for when things go wrong. They propose a suite of human rights due diligence tools, including human rights impact assessments (HRIAs), assurance processes for implementing partners, and proactive public engagement. This reflects a broader call for human rights-based frameworks and regulatory responses to AI, driven largely by critical lawyers, scholars, and civil society actors.

While existing technical and ethical initiatives are potentially valuable, none has the “broad global consensus or buy-in across stakeholder groups” of IHRL (Donahoe & Metzger, 2019, p. 18). By contrast, IHRL offers an already existing, comprehensive, and global set of norms, which some suggest can even function as a “much sought-after moral compass to constitute the basis of an AI governance framework” (Smuha, 2020). Though recognizing it is no panacea, proponents of a human rights-based approach claim it can provide an organizing framework for the design, development, and deployment of AI and algorithmic systems, one that can also accommodate other approaches, including technical solutions (Yeung, Howes, & Pogrebna, 2020). Unlike other tools such as algorithmic impact assessments, IHRL promises shared definitions of harm and shared ways of assessing it. And while the concept of remedy under many existing accountability models tends to be narrowly focused on fixing biased or otherwise harmful operations, effective remedy under IHRL is broader and can help ensure harms are not repeated. IHRL also provides a structured framework for mediating and resolving conflicting interests and objectives (McGregor, Murray, & Ng, 2019).

A key challenge for any human rights-based approach is the attribution of responsibilities and duties. States are the ultimate duty bearers under IHRL, but they are also required to put in place frameworks that prevent violations by third parties and provide remedies for them. This is complemented by the non-binding UN Guiding Principles on Business and Human Rights (BHR). As part of the BHR agenda, there have been calls for governments to mandate greater use of HRIAs for AI and algorithmic technologies (UNGA, 2018). As a recent study of Facebook’s HRIA in Myanmar following the genocide against the Rohingya shows, however, a number of conditions need to be met before HRIAs can become a legitimate and effective tool for AI and algorithmic systems (Latonero & Agarwal, 2021). To offer more than a veneer of accountability, these tools need to be part of broader human rights due diligence processes. They must address not just individual harms but also collective and cumulative harms (see Mantelero, 2016). Crucially, they must analyse technologies as sociotechnical systems. In other words, a major task is to construct hybrid knowledge of the technical, the social, and the legal (Valverde, 2003).

Understanding this anticipatory human rights work as a site of knowledge production, and so of power, raises a number of questions. Which forms of expertise are admitted or discarded? Who can occupy the position of ‘expert’? How can lay, popular and tacit knowledges be incorporated? What can human rights law not see? A key problem for human rights is that those who have power — human rights activists and experts who have access to political spaces — are positioned as its sole representatives and agents (Alcoff, 1991; Spivak, 2004).

On a more practical level, over-reliance on professional expertise may limit the success of human rights-based regulatory mechanisms in two main ways. First, like technological risk or impact assessments that aim to anticipate unintended consequences, expert-driven HRIAs (and human rights-inspired design and development) may prioritize short-term, calculable harmful impacts over more synergistic, unprecedented, and diffuse risks to human rights. Sheila Jasanoff sums up this concern, observing that “experts’ imaginations are often circumscribed by the very nature of their expertise” (2016, p. 250). Rather than simply replacing one technocratic vision with another, we should ask how these initiatives could recognize the interests and views of all people affected by AI, including those whose legal consciousness does not feature human rights discourse. A second limitation arises because professional expertise is shaped and constrained by its proximity to decision making. Questioning the practice of human rights mainstreaming, Koskenniemi (2010) suggests there is in fact much to be said in favour of human rights (and its experts) staying outside regular administrative procedures as watchdogs and critics. Bringing experts (including social scientists) on board by no means guarantees better fulfilment of human rights.

As we have seen, authors like Smuha advocate for IHRL as a lens and template for AI development and regulation because of the moral certainty and political closure it seems to offer. This quest for consensus and certainty through human rights has, however, been subject to considerable critique within political theory. In Reconstructing Human Rights, Hoover (2016) argues for a situated and agonistic understanding of human rights. On this view, human rights are both inherently contested and a tool for contesting existing understandings of political legitimacy and membership. Rather than treating this contestation as an unfortunate condition to be overcome, an agonistic approach embraces the conflict that political claims made in the name of human rights can generate. Human rights can be used to make new (and plural) claims on authority, including demands for fundamental changes to the social order. Those who make such claims do so through a distinct form of contestation that draws upon the ambiguous identity of ‘humanity.’ Understood in this way, human rights are a universal and pluralizing ethos which can give people more democratic control over their lives.

This approach dovetails with many critical decolonial accounts of human rights. For example, in his essay ‘On the Coloniality of Human Rights,’ Nelson Maldonado-Torres draws on Césaire and Fanon to identify a decolonial current within human rights thinking (distinguished from both Eurocentrist and so-called cultural relativist camps) which seeks to ground whatever is universal in humanity in “the very struggles of the colonized in affirming their humanity” (2017, p. 132). Contrary to dominant interpretations, the genealogy of human rights is not singularly European (Barreto, 2013). Similarly, in considering whether human rights can be an “emancipatory script,” Santos (2008) argues that such an endeavour cannot involve the simple replacement of a northern epistemology with one from the South. Instead, we need to make human rights an insurgent cosmopolitan project: a transnational coalition of mutual learning that could upend the suppression of knowledges constitutive of Western modernity. For Santos, such insurgent cosmopolitan projects include egalitarian North-South networks of solidarity and cross-cultural dialogues on human dignity. This is far removed from the technocratic work of HRIAs for AI outlined above.

Marking a materialist turn, scholarship has begun to interrogate the concentrations of global power and reliance on the extraction of labour and raw materials underpinning AI (Crawford, 2021). But these material dynamics, brought about by advancements in the capabilities and profusion of AI technologies, must also be understood as entangled with hierarchies of humanity (Wynter, 2003) and a deeply unequal colonial global political economy. A coming test for human rights will be how far they can go not just in regulating AI, but in identifying and dismantling these oppressive structures. How might human rights be used, for example, in a global movement to obstruct AI’s extractive frontiers? Or to contest the very authority of states to deploy (and regulate) racializing and immobilizing biometric surveillance? Instead of forestalling radical transformation and legitimating existing power arrangements, as some would contend (Badiou, 2001), human rights — if embraced as an agonistic and insurgent cosmopolitan project — may offer powerful tools to gain more collective, democratic control over AI’s trajectories.


References

Alcoff, L. (1991). The Problem of Speaking for Others. Cultural Critique, (20), 5–32. doi:10.2307/1354221

Badiou, A. (2001). Ethics: an essay on the understanding of evil (P. Hallward, Trans.). London, New York: Verso.

Barreto, J.-M. (2013). Human rights from a third world perspective: Critique, history and international law. Newcastle upon Tyne, UK: Cambridge Scholars Publishing.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. London: Yale University Press.

Donahoe, E., & Metzger, M. M. (2019). Artificial Intelligence and Human Rights. Journal of Democracy, 30(2), 115–126. doi:10.1353/jod.2019.0029

Hoover, J. (2016). Reconstructing human rights: A pragmatist and pluralist inquiry in global ethics. Oxford, United Kingdom: Oxford University Press.

Jasanoff, S. (2016). The Ethics of Invention: Technology and the Human Future. New York: W. W. Norton & Company.

Koskenniemi, M. (2010). Human Rights Mainstreaming as a Strategy for Institutional Power. Humanity: An International Journal of Human Rights, Humanitarianism, and Development, 1(1), 47–58. doi:10.1353/hum.2010.0003

Latonero, M., & Agarwal, A. (2021). Human Rights Impact Assessments for AI: Learning from Facebook’s Failure in Myanmar. Carr Center Discussion Paper Series. Cambridge, MA: Harvard Kennedy School.

Maldonado-Torres, N. (2017). On the Coloniality of Human Rights. Revista Crítica de Ciências Sociais, (114), 117–136. doi:10.4000/rccs.6793

Mantelero, A. (2016). Personal data for decisional purposes in the age of analytics: From an individual to a collective dimension of data protection. Computer Law & Security Review, 32(2), 238–255. doi:10.1016/j.clsr.2016.01.014

McGregor, L., Murray, D., & Ng, V. (2019). International Human Rights Law as a Framework for Algorithmic Accountability. The International and Comparative Law Quarterly, 68(2), 309–343. doi:10.1017/S0020589319000046

Pizzi, M., Romanoff, M., & Engelhardt, T. (2020). AI for humanitarian action: Human rights and ethics. International Review of the Red Cross, 102, 145–180.

Santos, B. d. S. (2008). Another Knowledge is Possible: Beyond Northern Epistemologies. London: Verso.

Smuha, N. A. (2020). Beyond a Human Rights-Based Approach to AI Governance: Promise, Pitfalls, Plea. Philosophy & Technology. doi:10.1007/s13347-020-00403-w

Spivak, G. C. (2004). Righting Wrongs. The South Atlantic Quarterly, 103(2–3), 523–581. doi:10.1215/00382876-103-2-3-523

Valverde, M. (2003). Law’s dream of a common knowledge. Princeton, N.J., Oxford: Princeton University Press.

Wagner, B. (2018). Ethics as an escape from regulation: From “ethics-washing” to ethics-shopping? In E. Bayamlioğlu, I. Baraliuc, L. Janssens, & M. Hildebrandt (Eds.), Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen’ (pp. 84–89). Amsterdam University Press.

Wynter, S. (2003). Unsettling the Coloniality of Being/Power/Truth/Freedom: Towards the Human, After Man, Its Overrepresentation — An Argument. CR: The New Centennial Review, 3(3), 257–337. doi:10.1353/ncr.2004.0015

Yeung, K., Howes, A., & Pogrebna, G. (2020). AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (1st ed.). Oxford University Press.