Illustration by Somnath Bhatt

Existentially Adverse Outcomes from AI in the Global South

A guest post by Aditya Singh and Daniel Vale. Aditya is a PhD candidate with the Centre for Technomoral Futures at the Edinburgh Futures Institute, and the Global Academy of Agriculture and Food Security. Daniel is an external PhD candidate at eLaw — Center for Law and Digital Technologies, Leiden University, the Netherlands.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.


The rise of AI has brought attention to the ‘existential risks’ it may pose. Scholars have typically described AI-led existential risks in terms of dystopian futures where superintelligent AIs arise and turn against humans: enslaving them, rendering them obsolete, or raining down physical terror via fully autonomous weapons (Adams, 2016; Ord, 2020; Bostrom, 2014). In this essay, we interrogate this discourse, challenging its North-centricity. We illustrate how North-led dynamics of extraction, at the expense of the Global South, may already be precipitating existential-level hazards for Southern populations. In particular, populations in the Global South face present-day economic, military, and environmental risks of equivalent magnitude and moral significance to those predicted by the contemporary discourse on AI existential risks.

Existential risks are typically imagined as risks that affect all future generations of humanity and are terminal in their intensity. They could destroy humanity entirely, permanently and drastically reduce the quality of life, prevent humanity from reaching ‘technological maturity,’ and foreclose any chance of civilizational recovery (Bostrom, 2013; Posner, 2004; Liu et al., 2018). They jeopardize the survival of humanity as a whole. Bostrom (2014), for instance, speculates as to whether a superintelligent AI would align with humanity’s best interests and welfare, or whether we could have any conception of its thought processes at all, which might be as inaccessible to us “as human thought processes are to cockroaches.” A superintelligent AI programmed towards seemingly harmless ends, like manufacturing paperclips or winning chess games, could ‘break away’ if not designed appropriately, acquiring ever more resources to accomplish its goals. Popular figures in industry and academia, like Elon Musk and Stephen Hawking, have amplified these narratives in public discourse, cautioning against the risk to humanity from the successful creation of superintelligent AI (Future of Life Institute, 2015).

This imagination of existential risk from AI prompts research agendas to evaluate the feasibility of ‘runaway’ artificial intelligence, estimate timelines for when superintelligence may arise, and explore ways to align superintelligent AI with human values and goals (Chalmers, 2010; Sotala, 2017; Goertzel & Pitt, 2014). This (largely North-led) agenda-setting has material implications for which problems are studied with the limited funding and resources available. However, not only are Southern populations more vulnerable to ‘existential’ risk (in part because of their post-colonial contexts), but North-led development of AI perpetuates extractive patterns that exacerbate these vulnerabilities.

Popular imaginations of existential risks lean towards one-hit knock-outs, like asteroids and nuclear apocalypses; but these are only a subset of all existential risks. A fuller conception would include ‘existentially adverse outcomes’ (and not simply the rise of existential hazards, like superintelligence or asteroids) that arise from other events, or from the interaction of indirect, socially and culturally mediated factors which might cumulatively pose existential risk (Liu et al., 2020). This assessment acknowledges that different societies have differing degrees of vulnerability and exposure to threats and hazards. It is these vulnerabilities and degrees of exposure, and not just the nature of the threat, that determine what constitutes an existential risk. Current conceptions of AI existential risk therefore under-appreciate the relevance of unequal exposures and vulnerabilities between the North and South, which can give rise to existential risks that do not originate from a world-destroying superintelligent AI.

While many argue that there should be more focus on future existential risks, we propose that the risks posed by AI to the Global South, in the present and near future, merit greater concern. So, how is AI generating existential hazards for vulnerable populations in the Global South?

Military risks: The first and most apparent example is the use of lethal (partially) autonomous weapons systems (“LAWS”), which are predominantly operated by the Global North against Global South populations. The use of LAWS by both Israel and the United States throughout the Middle East, North Africa, and the Horn of Africa is well documented (Garcia, 2019). Likewise, LAWS and their operators have well-documented error rates, resulting in the killing of indigenous populations (Garcia, 2019). Most recently, a report from the UN Panel of Experts on Libya noted that LAWS with ‘fire, forget, and find’ capabilities were deployed in Libya against retreating forces and logistics convoys (UN, 2021; Cramer, 2021).

While militaries typically justify their use of LAWS by arguing that they reduce civilian casualties, scholarly reports have documented resultant civilian casualties in the Middle East and the Horn of Africa since at least 2012 (Center for Civilians in Conflict & Human Rights Clinic at Columbia Law School, 2012). Since then, these numbers have likely escalated (Garcia, 2019; Bergen et al., 2020). The rise in casualties from LAWS has spurred international campaigns advocating their abolition, for example the Campaign to Stop Killer Robots (first launched in April 2013) (Garcia, 2019). The use of LAWS in the Global South is indicative of a larger issue: the pioneering, engineering, and development of dystopian technology systems in the Global North and their subsequent application in the Global South (Amnesty International, 2021). The cumulative result is that AI weapons systems, engineered in the North but tested and deployed in the Global South, present existential hazards to Global South actors through their continued use.

Economic Risks: The collapse of off-shoring by Global North companies, driven by the diminishing cost of AI and robotics in Global North manufacturing (whether for better or worse), will have dramatic effects on Global South economies and, in turn, on the socio-economic prosperity and survival of local populations (Badiuzzaman & Rafiquzzaman, 2020; Arun, 2019; Elliot, 2019). Given their post-colonial contexts, these societies may not have sufficient social safety nets and institutional resilience to mitigate these disruptions. One example is the flurry of commercial media debate about the use of chatbots in places like the Philippines, where they are likely to replace current workforces (Reed, Ruehl & Parkin, 2020). It is commercially reported that chat and AI bots were used “less than 10% of the time [before the Covid pandemic], but that’s climbed to almost 25% and could reach 35% by year end [2021]” (Bloomberg News, 2021). Such shifts will likely leave large portions of local populations without a source of income and, in turn, threaten their immediate survival.

Environmental Risks: Perhaps most notable is the environmental impact of AI on the Global South, which may bear its most immediate costs. The disproportionate extraction of resources from the resource-rich Global South to sustain AI and the supply chains it fosters is increasingly being documented (Crawford & Joler, 2018; Kak, 2020). As Dauvergne notes, “AI is going to spur wasteful consumption, natural resource extraction, and the production of electronic waste” (Dauvergne, 2021). The environmental devastation from AI will likely disproportionately affect Southern populations on account of comparatively underfunded governmental agencies and lax regulations and protections (Arun, 2019). In addition, there are existing asymmetries in waste management practices between the Global North and South, in which the latter absorbs the excesses of the former. The escalation of e-waste from AI will only exacerbate this imbalance (Amankwaa, 2013; Cotta, 2020; Murali & Mishra, 2021). The impact of such practices may further spur environmental migration and social disruption, amongst other consequences. This, in turn, places the environmental existential threat of AI most immediately on Global South actors.


If the ‘South’ is a metaphor for the human suffering caused by capitalism and colonialism (Santos, 2016; Arun, 2019), AI existential risk discourse remains uncritical of, and obscures, the degree to which AI and automation are led by the impulse of capital accumulation. The development of AI, potentially (or inevitably) towards superintelligence, is presented as progressing outside of human control (Asp, 2019). In truth, the social and ecological destruction attributed to a potential superintelligent AI is already premised on the underlying logics of capital accumulation that continue to impoverish Southern populations.

Bostrom imagines the loss from existential risk as the loss of future value, or the inability of humanity to reach ‘technological maturity,’ envisioned in terms of the ability to colonize other planets. This conception of harm is then simply the lost ability to continually accumulate capital on the basis of current patterns of human and ecological resource exhaustion. Yet the development and deployment of AI systems in the present and the short term is led by these very forces, which disproportionately impoverish the Global South.

Bostrom’s typology of existential risks itself includes ‘flawed realization’ as an existentially adverse outcome: a situation where humanity reaches ‘technological maturity’ in a way that is ‘dismally and irremediably flawed.’ We propose that humanity evading dystopian superintelligence (or even realizing utopian superintelligence) at the expense of ecological collapse and widespread misery among vulnerable populations should fit that definition.

Some existential risks (like superintelligence) appear to attract more interest because of their inherent ‘sexiness’ (Kuhlemann, 2019). Risks may be ‘sexy’ when they are easy to visualise; epistemically neat (it is easy to identify the academic fields best placed to understand them, for instance astronomers for asteroids); sudden and unpredictable in onset; and technology-related (often premised on flattering assumptions about human ingenuity, merely poorly harnessed, while also centering technological solutions).

The real hazards from AI facing Southern populations today are neither neat nor sexy. They require confronting the global dynamics of capital, and the past and contemporary asymmetries between the North and South. Far-off AI apocalypses may capture the imagination, but they distract from the real military, economic, and environmental hazards experienced by the South.


References

Adams, T. (2016). Artificial intelligence: ‘We’re like children playing with a bomb’. The Observer. https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine

Amankwaa, E. F. (2013). Livelihoods in risk: Exploring health and environmental implications of e-waste recycling as a livelihood strategy in Ghana. Journal of Modern African Studies, 51(4), 551–575. https://doi.org/10.1017/S0022278X1300058X

Amnesty International (2021) New EU Dual Use Regulation agreement ‘a missed opportunity’ to stop exports of surveillance tools to repressive regimes. Available at: https://www.amnesty.org/en/latest/news/2021/03/new-eu-dual-use-regulation-agreement-a-missed-opportunity-to-stop-exports-of-surveillance-tools-to-repressive-regimes/

Amnesty International (2021) Out of Control: Failing EU Law for Digital Surveillance Export | Report. Available at: https://www.amnesty.org/en/latest/news/2021/03/new-eu-dual-use-regulation-agreement-a-missed-opportunity-to-stop-exports-of-surveillance-tools-to-repressive-regimes/

Arun, C. (2019), AI and the Global South: Designing for Other Worlds. Forthcoming in Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), The Oxford Handbook of Ethics of AI, Oxford University Press, Available at SSRN: https://ssrn.com/abstract=3403010

Asp, K. (2019). Autonomy of Artificial Intelligence, Ecology, and Existential Risk: A Critique. In Cyborg Futures (pp. 63–88). Springer International Publishing. https://doi.org/10.1007/978-3-030-21836-2_4

Badiuzzaman, M., & Rafiquzzaman, M. (2020). Automation and Robotics: A Review of Potential Threat on Unskilled and Lower Skilled Labour Unemployment in Highly Populated Countries. International Business Management, 14(1). Available at: https://www.researchgate.net/profile/Md-Badiuzzaman-2/publication/344327159_Automation_and_Robotics_A_Review_of_Potential_Threat_on_Unskilled_and_Lower_Skilled_Labour_Unemployment_in_Highly_Populated_Countries/links/5f685d1d299bf1b53ee76b5f/Automation-and-Robotics-A-Review-of-Potential-Threat-on-Unskilled-and-Lower-Skilled-Labour-Unemployment-in-Highly-Populated-Countries.pdf

Bergen, P., Sterman, D. & Salyk-Virk, M. (2020). The Drone War in Libya. Available at: https://www.newamerica.org/international-security/reports/americas-counterterrorism-wars/the-drone-war-in-libya/

Bloomberg News (2021) Empathetic Robots Are Killing Off the World’s Call-Center Industry. Available at: https://www.bloomberg.com/news/articles/2021-03-16/artificial-intelligence-chatbots-threaten-call-center-industry-human-operators

Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1), 15–31. https://doi.org/10.1111/1758-5899.12002

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Center for Civilians in Conflict & Human Rights Clinic at Columbia Law School (2012) The Civilian Impact of Drones: Unexamined Costs, Unanswered Questions | Report. Available at: https://civiliansinconflict.org/wp-content/uploads/2017/09/The_Civilian_Impact_of_Drones_w_cover.pdf

Chalmers, D. J. (2010). The Singularity: A Philosophical Analysis. Journal of Consciousness Studies, 17(9–10), 7–65.

Cotta, B. (2020). What goes around, comes around? Access and allocation problems in Global North–South waste trade. International Environmental Agreements: Politics, Law and Economics, 20(2), 255–269. https://doi.org/10.1007/s10784-020-09479-3

Cramer, E. (2021) A.I. Drone May Have Acted on its Own in Attacking Fighters | The New York Times. Available at: https://www.nytimes.com/2021/06/03/world/africa/libya-drone.html

Crawford, K. & Joler, V. (2018) Anatomy of an AI System: The Amazon Echo As An Anatomical Map of Human Labor, Data and Planetary Resources. AI Now Institute and Share Lab. Available at: https://anatomyof.ai

Dauvergne, P (2021) The globalization of artificial intelligence: consequences for the politics of environmentalism, Globalizations, 18:2, 285–299, DOI: 10.1080/14747731.2020.1785670

Elliot, A. (2019) The Culture of AI: Everyday Life and the Digital Revolution. Routledge: London. Print.

Future of Life Institute (2015) Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter.

Garcia, E. (2019) The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South. Available at SSRN: https://ssrn.com/abstract=3452323

Goertzel, B., & Pitt, J. (2014). Nine Ways to Bias Open-Source Artificial General Intelligence Toward Friendliness. In Intelligence Unbound (pp. 61–89). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118736302.ch4

Kak, A. (2020). “the Global South is everywhere, but also always somewhere”: National policy narratives & AI Justice. AIES 2020 — Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 307–312. https://doi.org/10.1145/3375627.3375859

Kuhlemann, K. (2019). Complexity, creeping normalcy and conceit: sexy and unsexy catastrophic risks. Foresight, 21(1), 35–52. https://doi.org/10.1108/FS-05-2018-0047

Liu, H. Y., Lauta, K., & Maas, M. (2020). Apocalypse Now? Journal of International Humanitarian Legal Studies, 11(2), 295–310. https://doi.org/10.1163/18781527-01102004

Liu, H. Y., Lauta, K. C., & Maas, M. M. (2018). Governing Boring Apocalypses: A new typology of existential vulnerabilities and exposures for existential risk research. Futures, 102, 6–19. https://doi.org/10.1016/j.futures.2018.04.009

Murali, A., & Mishra, P. (2021) How the Phone you Chucked is Killing Seelampur: Discover how you contribute to India’s ever-growing pollution and health problems | Factordaily.com. Available at: https://factordaily.com/ewaste/the-dark-side-of-indias-digital-underbelly/

Ord, T. (2020) The Precipice: Existential Risk and the Future of Humanity. Hachette Books: New York. Print.

Posner, R. A. (2004). Catastrophe: Risk and Response. Oxford University Press.

Reed, J., Ruehl, M., & Parkin, B. (2020) Coronavirus: will call centre workers lose their ‘voice’ to AI? | The Financial Times. Available at: https://www.ft.com/content/990e89de-83e9-11ea-b555-37a289098206

Santos, B. de S. (2016). Epistemologies of the South: Justice Against Epistemicide. Routledge.

Sotala, K. (2017). How feasible is the rapid development of artificial superintelligence? Physica Scripta, 92(11), 113001. https://doi.org/10.1088/1402-4896/aa90e8

Sotala, K., & Gloor, L. (2017). Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Informatica (Ljubljana), 41(4), 389–400.

UN Security Council (2021) Letter dated 8 March 2021 from the Panel of Experts on Libya established pursuant to resolution 1973 (2011) addressed to the President of the Security Council, S/2021/229. Available at: https://undocs.org/S/2021/229