Illustration by Somnath Bhatt

Can an algorithmic system produce dissent?

A guest post by Sareeta Amrute. Sareeta is an anthropologist exploring data, race, caste, and capitalism in global South Asia, Europe, and the United States. She is Associate Professor at the University of Washington, and Director of Research at Data & Society. Twitter: @SareetaAmrute

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

Dissent is a critical democratic function. Whether through critique, demands, protest, or resistance, dissenters play a central role in speaking out against dominant social structures. Through dissent, we expand the realm of debate and broaden the set of actors included in collective decision-making. Dissensus, rather than consensus, is at the heart of democratic practice (Rancière 2010).

As algorithms increasingly overlay human actors in powerful areas of decision-making — such as courtrooms, hiring, transportation, farming, or government benefits eligibility — how will we dissent? Algorithms can unfairly limit opportunities, restrict services, and produce digital data discrimination, often as “black boxes” with little to no transparency. How, then, can we make algorithmic systems more democratic, so that collective decision-making is expansive rather than restrictive? This essay explores three intersections of AI and dissent: dissenting from algorithmic decisions, protecting political dissent, and fostering the sensibility to dissent from a given system.

Dissenting from Algorithmic Decisions:

Algorithmic and AI accountability mechanisms often include audits and impact assessments. Audits evaluate the kinds of predictive results that AI systems produce and highlight their disparate or discriminatory impacts on particular groups. These accountability mechanisms potentially examine the context in which an algorithmic system is deployed, and build a set of actors and rules to determine what possible harms might result from a given algorithmic system (Moss, Watkins, Singh, Elish, and Metcalf 2021). In this sense, accountability describes all the participants and processes that have a stake in a given algorithmic system. Yet, most current accountability processes rely on company largesse to conduct and act on audits of their algorithmic systems. These audits are often conducted either internally or by a narrow set of external experts who decide what communities and questions are included in review, thereby limiting the necessary participants and relevant contexts for holding algorithms accountable.

These realities limit the current mechanisms to hold algorithmic systems accountable to what companies and government agencies choose to publicize, or to what independent auditors can get journalists to scrutinize. While these mechanisms may be improved through standard setting for algorithmic audits, accountability curtails dissent by limiting objections to the paradigms of acceptance, adjustment, and informed refusal from an already-operating algorithmic system (Benjamin 2019, Simpson 2014).

Accountability methods can produce dissent from a particular decision, or from the use of algorithmic decision-making, if actors hold the system to account and decide that a given system should be temporarily or permanently stopped. For example, the bans on facial recognition technologies in more than a dozen North American cities, including San Francisco, Brookline, Somerville, Northampton, and Cambridge, demonstrate the ‘informed refusal’ of algorithmic governance. While bans and moratoriums impede the expansion of technologies already considered hazardous, such mechanisms make no claims on how a set of issues comes under the heading of accountability. In other words, algorithmic accountability does not ask: what makes a system, a set of concerns, or particular harms eligible for accountability mechanisms? Accountability as it is framed in policy debates does not question where we derive our impulse to hold a system to account, especially for algorithmic systems that ply their trade anonymously, obscuring what the targets of resistance should be and what forms dissent should take. To address these concerns, we need to move from the more circumscribed arena of algorithmic accountability to the larger question of how algorithmic systems shape our capacity to dissent.

While we most often think about dissent from a given algorithmic system, we rarely address the question of the effects that algorithmic systems have on the capacity to dissent (Chari 2015). Beyond targeted interventions to identify harms produced through algorithms, political dissenters widen the imagination of who and what should be included in common decision-making, and broaden our conceptions of how democratic practice should be organized. This vital democratic function both intersects with and enlarges conventional data politics. Conventional data activism around caste and race, for example, might focus on how caste-based hate speech circulates without sanction on social media platforms, instigating violent and deadly attacks against oppressed-caste families and individuals (Shanmugavelan 2021), or on how predictive policing correlates data from different algorithmic systems such that the simple fact of being Black becomes a proxy for being criminal (Brayne 2020, Jefferson 2020).

Data activists who work to curb online hate speech and root out predictive policing draw on a much larger domain of thinking about the problems of racism and casteism. Such thinking is often expressed through terms like ‘abolition’ and ‘annihilation,’ cognate concepts stemming from anti-racist and anti-caste thought rooted in the writings of Angela Davis, Savitribai Phule, W.E.B. Du Bois, B.R. Ambedkar, Ruth Wilson Gilmore, Suraj Yengde, and Yashica Dutt (Amrute 2020). These broader modes of dissent over who is exposed to institutional violence and how we organize safety require protection, especially because dissenters often use social media platforms to broadcast their messages and open online tools to organize their social movements (Jackson, Bailey, and Welles 2020). These platforms and tools are governed by algorithms that both help spread and organize messages of dissent and expose dissenters to surveillance, arrest, and harassment (Richardson 2020).

Protecting Political Dissent:

Organizers in the Movement for Black Lives and in anti-caste movements use encrypted as well as public platforms to strategize. While encryption offers some protection from surveillance, it is not perfect. Organizers move across connected devices, software systems, and platforms, which each present vulnerabilities due to system design and the tendency to devolve responsibility for security onto end users.

Consider the recent arrest of activists in India who contributed to a social media toolkit in support of the Farmers’ Protest. The toolkit, constructed by a human rights group in Vancouver, Canada, enabled police in India to arrest activists simply because they had shared a Google Doc with the Canadian activists. They had edited the document publicly, allowing their contributions and IP addresses to be traced. This example shows how vulnerable activists are, given the current predilection of technology companies to downplay security as a design concern in favor of invisible technologies that “just work” (Vagle 2020). It also demonstrates that these vulnerabilities are not shared equally, but instead track to social position and geography.

In the United States, the federal government haphazardly regulates the privacy-protecting claims technology companies make. This has led legal scholars to conclude that privacy-protection policies for apps like Snapchat and WhatsApp are mostly an advertising strategy, which may produce a false sense of security among communities who use them (McNealy and Shoenberger 2016). What is more, technology companies like Apple and Google make claims on the basis of protecting individual activities, while dissent as a political practice is collective. Such privacy-protecting measures derive their logic from an individual’s right to interiority free from outside influence. While such a right undergirds an individual’s ability to reflect on and act on social relations, such frameworks have trouble recognizing modes of common dissent (Amrute 2019). At present, collective practices of dissent produce unevenly distributed risks. When algorithmic systems act as open surveillance systems, sharing makes communities of dissenters vulnerable. Risk is not shared collectively; it is inequitably distributed across geographies and communities.

Even online safety features conceived to protect the interior space for individual dissent are inadequate to the distributed risk of political dissent and to the uneven effects of taking on this risk. Protecting dissent primarily through individual choice transfers responsibility for safeguarding dissent onto the shoulders of activists who respond to crises and coordinate across multiple locations and levels of knowledge around security practices.

Some readers may object that protecting communities who dissent also shelters right-wing and totalitarian movements seeking to limit rather than widen democratic participation, yet democratic and totalitarian movements are not equally supported by techno-social regimes. While protecting democratic dissent also provides cover for right-wing organizing, anti-democratic movements ally with systems of power and receive the support of these systems through neglect. As cases ranging from the repeated internet shutdowns in Kashmir under the government of India to the social media surveillance of Black Lives Matter protestors show, algorithmic systems have been designed or adapted to make it easier for corporations, in cooperation with state governments, to surveil and suppress minority communication (Canella 2018).

Protecting democratic dissenters is not a feature of these technologies, even while these same technologies contain many back and front doors that powerful governments and violent majorities use to surveil and control minoritized populations. While the aesthetics and tactics of social movements often look similar and borrow from one another, algorithmic surveillance systems police minoritized groups and discount those that support current arrangements of power. As a result, the political economy of AI systems tends toward exposing dissensus and protecting consensus. Thinking about protecting those who keep alight the flame of democratic dissent requires us to move our attention away from narrow regimes of technocratic practice and toward interventions that widen our imagination of who and what needs to be considered when making decisions in common.

Fostering the Sensibility to Dissent from a Given System:

Algorithmic systems might be redesigned to broaden rather than narrow perception. Yet current evidence suggests the opposite: from Netflix nudges to news bubbles and YouTube radicalization, algorithmic systems are designed to show us what these systems think we want to see, read, and listen to. Within these systems, the chances that we will be exposed to content that challenges our sense of who is entitled to make decisions for the public are low. Historical data is used to guide what we know and how we feel rather than challenge consensus.

Algorithms currently work to serve personalized content, and they surveil in order to do so. As these systems further narrow our realities, we become less able to connect with others and build dissensus. The urge to protect political dissenters is foreclosed within such a reality, in which surveillance seems to serve individual desire. To expand the realm of algorithmic possibilities, we need to look to the work of artists such as Stephanie Dinkins and micha cárdenas, who create experiential work that introduces narrative and serendipity into the way algorithms interact with people.

In Dinkins’ celebrated work Not the Only One, she creates an African American family narrative through an AI system; when visitors interact with the AI, they receive stories that take them on a journey through fragmented histories of African American life. In micha cárdenas’ Sin Sol/No Sun, players of an augmented reality game experience climate change from a trans latinx perspective. Moving through these algorithmic worlds might expand the sense of who should be part of our worlds, from the sites of climate change to the call of the past in the present. An extreme focus on techno-centric notions of justice, exemplified by regulation on the one hand and by technological fixes to the problem of protected communication on the other, emphasizes some imaginations of dissent at the cost of more expansive imaginations that might include desires for other democratic futures and goals. In Dinkins’ and cárdenas’ work, the focus shifts away from protecting how people communicate with each other and toward protecting the very ability of technical systems to support narratives that create a different perception of who and what ‘belong’ to the question of race, technology, and environment. In Dinkins’ oeuvre, Black voices ‘belong’ not primarily as victims but as sources of sometimes cryptic knowledge about the past that we need to unpack. In cárdenas’, the subject of climate change belongs in the first instance to the body that simultaneously experiences environmental devastation and the multiple borders of gender and migration.

The current ways that we refuse algorithmic decision-making are extremely limited. By including these three components of dissent in our consideration of AI, we build momentum toward more meaningful regulation of algorithmic systems. At the same time, the boundaries of who and what should be part of this regulation will remain narrow unless we include other forms of dissent, such as the ability to safeguard communications channels for political dissent and the capacity to recognize the limits of our current formations of power and participation.

Citations

Amrute, Sareeta 2019. “Of Techno-Ethics and Techno-Affects.” Feminist Review. https://journals.sagepub.com/doi/full/10.1177/0141778919879744

Amrute, Sareeta 2020. “Racial Violence & Technology: A Conversation with Ruha Benjamin.” ABA/CASTAC Invited Lecture. Raising Our Voices, American Anthropological Association, November 12. https://www.youtube.com/watch?v=9J9aRp_5a4s&feature=youtu.be

Benjamin, Ruha 2019. Race After Technology. New York: Polity Press.

Brayne, Sarah 2020. Predict and Surveil. Oxford: Oxford University Press.

cárdenas, micha. Sin Sol/No Sun. https://michacardenas.sites.ucsc.edu/sin-sol-no-sun/

Canella, Gino 2018. “Racialized Surveillance: Activist Media and the Policing of Black Bodies.” Communication, Culture & Critique 11(3): 378–398.

Chari, Anita 2015. A Political Economy of the Senses: Neoliberalism, Reification, Critique. New York: Columbia University Press.

Dinkins, Stephanie. Not the Only One. https://www.stephaniedinkins.com/ntoo.html

Jackson, Sarah J., Bailey, Moya, and Welles, Brooke Foucault 2020. #HashtagActivism: Networks of Race and Gender Justice. Cambridge, MA: MIT Press.

Jefferson, Brian 2020. Digitize and Punish. Minneapolis: University of Minnesota Press.

McNealy, Jasmine and Shoenberger, Heather 2016. “Reconsidering Privacy-Promising Technologies.” Tulane Journal of Technology and Intellectual Property 19: 1–25.

Moss, Emanuel, Watkins, Elizabeth Anne, Singh, Ranjit, Elish, Madeleine Clare, and Metcalf, Jacob 2021. “Assembling Accountability: Algorithmic Impact Assessments for the Public Interest.” Data & Society Research Institute.

Rancière, Jacques 2010. Dissensus: On Politics and Aesthetics. New York: Continuum.

Richardson, Allissa V. 2020. Bearing Witness While Black. London: Oxford University Press.

Shanmugavelan, Murali 2021. “Caste-hate Speech: Addressing hate speech based on work and descent.” International Dalit Solidarity Network.

Simpson, Audra 2014. Mohawk Interruptus. Durham: Duke University Press.

Vagle, Jeffrey L. 2020. “Cybersecurity and Moral Hazard” Stanford Technology Law Review. 23(1): 71–113.