Illustration by Somnath Bhatt

AI for Good or Control? A look at the growth in AI investment in MENA

A guest post by Islam al Khatib.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

AI is everywhere in the Middle East and North Africa (MENA). Governments are increasingly positioning themselves as leaders in AI, and claiming that AI will transform healthcare, politics, research, sciences, sports, tech, and more. Saudi Arabia recently launched a national AI strategy, while the United Arab Emirates (UAE) seeks to be a major hub for developing AI techniques and legislation. As part of the UAE’s digital innovation hub program for public services, it plans to use blockchain technology for 50 percent of government transactions by 2021.

The main driver behind this investment is the need for governments, specifically Gulf regimes, to seek alternative sources of revenue and growth. The development of non-oil sectors through investment in AI technologies could strategically position the region for years to come. As Gillespie (2016) has remarked, AI and algorithmic-power related terminologies appear in the public narrative not only as a noun but increasingly as an adjective, in relation to issues as wide-ranging as identity, culture, ideology, accountability, governance, imaginary, and regulation.

While much technology investment in MENA is wrapped up in the discourse around ‘AI for good,’ I argue that these actions and narratives can legitimize structures of control that silence dissent from social movements. The efforts to embed AI systems throughout MENA governments and businesses have exposed political tensions and vulnerabilities, and have shaped how social movements and digital activism are able to function. AI, datafication, and big data, along with their mythical and ideological underpinnings under the constraints of data capitalism, have been viewed as tools for progress and national development, even though they, and the systems that create and sustain them, are complicit in silencing dissent.

For instance, across the MENA region citizens are being arrested for posting on the same platforms that are otherwise promising progress and development. In February 2021, feminist activist Loujain al Hathloul was released after she spent 1001 days in prison for “using the internet to harm public order.” Also in February 2021, journalist Tayssir al-Najjar passed away after developing illnesses during a three-year jail sentence he received for breaching a state law against “information technology crimes.” He had been previously detained and tortured in Emirati prisons for posting on Facebook.

Historically, MENA governments have also used different computational propaganda tactics, including bots and content moderation, to disseminate narratives to manufacture a sense of state legitimacy through technological progress, all while demonizing dissent. This is happening in the background, and is often overlooked and overshadowed by the positive and progress-oriented discourse around technology in the region. It pushes movements to question the nature and goal of this progress: is it to better the lives of people, or to harden the iron grip on freedom of expression?

Rather than focus on the purely technical aspects of AI, computational propaganda mechanisms, and content moderation, or on their prospects for economic development, this essay argues that we should focus on the social domain, especially where the visions and demands of citizens and others diverge from the State-sanctioned visions of “development” and “AI for Good.” We should also explore more closely the proposition that AI is above all the foundational component of a deeply intentional and highly consequential new logic of surveillance, and how that logic is used to crack down on dissent. This new form of AI-based capitalism aims not only to predict and modify human behavior as a means to produce revenue and market control, but also to police and discipline activists, particularly the already demonized bodies of women and feminists.

Many recent incidents sit within a larger contextualized history of instrumentalizing AI, bots, and reporting mechanisms against feminists. For example, in 2020, five Egyptian women were charged with ‘indecency,’ sentenced to two years in prison, and fined US$20,000 over social media posts, particularly on TikTok. Prior to that incident, the Public Prosecutor’s office issued statements highlighting what it called ‘the potential dangers threatening the youth via digital platforms.’ In 2018, upon the arrest of women human rights defenders in Saudi Arabia, the authorities used bots and reporting mechanisms to hijack solidarity campaigns and to smear the arrested feminists as traitors. These arrests came as Egypt plans for ‘AI’ to contribute 7.7 percent of its GDP by 2030, and as Saudi Arabia’s ‘Vision 2030’ positions it to become a global leader in AI. Clearly, what governments and multinational tech conglomerates project as the benefits or uses of AI excludes and erases protections and safeguards for basic human rights, including freedom of expression and the capacity to dissent. Within this state-hijacked vision, only certain kinds of ‘good’ qualify as ‘AI for Good.’

This rise in technological and AI-centric investment and development is further complicated by its ties to government surveillance operations, and by the secrecy with which technological tools are deployed. In 2016, for example, the UAE targeted local activist Ahmed Mansoor with spyware that allowed its operator to record phone calls, intercept messages, and track its subject’s movements. Researchers and human rights groups linked the software to the Emirati authorities, who later detained Mansoor on suspicion of publishing flawed and false information to ‘incite sectarian strife and hatred’ and ‘harm the reputation of the state.’

While most MENA-related AI discourses focus either on how AI will facilitate better communication with activist movements or on its importance for economic development, both neglect the significant role of AI, coordinated by governments, in silencing activists and deterring human development. In fact, both approaches tend to neglect, or even erase, the significance of the sociopolitical and historical contexts and conditions in which AI is developed in the region. Dissent and the ability to dissent are not antithetical to development and national progress; they are integral to building truly democratic AI futures, if such futures are possible at all.

Technological instrumentalism, functionalism, and determinism are the three spectres that haunt the cyberspace/movement dynamic in the MENA region. There is therefore a need to push forward a new discourse that traces the technical, societal, and political infrastructures of AI, and to critically examine the ways in which the ‘AI for good’ discourse legitimizes structures of control that result in further oppression of social movements. There is also a need to push back against presentism (Postill 2012), the fetishization of technological novelty, especially in so-called ‘developing countries.’ The tendency to treat the latest technological advancement as a fetish when considering social movements (Mattoni and Treré 2014, p. 255) has been clearly identified by research carried out at various latitudes in the Middle East (Hofheinz 2011; Lim 2018). Radical critical and abolitionist positions on AI deployment are gaining ground in the US and Europe, where activists and others have been able to resist the deployment of facial recognition and surveillance systems by making a case for protecting minoritized communities. People in the MENA region have long been experimented upon and punished for dissent through a combination of the same US-developed technologies of warfare and the complicity of state governments and enterprises in the region. As we push towards a new liberatory AI discourse that centers the right to dissent, there is scope for translocal solidarity as well.

When we say ‘AI for good,’ it’s important to ask: Good for whom? I invite us to question and explore the ways in which AI structures of control are being developed in the region as we speak, and to find ways to delegitimize discourse that dismisses and erases serious concerns over human rights violations and crackdown on dissent.


References

Bucher, T., 2016. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information, Communication & Society, 20 (1), 30–44.

Gillespie, T., 2016. Content moderation, AI, and the question of scale.

Treré, E., 2019. Hybrid Media Activism: Ecologies, Imaginaries, Algorithms. Routledge.

“Banking on artificial intelligence,” Gulf News, 12 August 2018

“Saudi Arabia embraces AI-driven innovation,” WIPO Magazine, September 2018

“The UAE Spends Big on Israeli Spyware to Listen In on a Dissident,” Foreign Policy, August 2016

Tambini, D., 2016. In the new robopolitics, social media have left newspapers for dead. The Guardian, 18 November.

Tufekci, Z., 2014. Engineering the public: big data, surveillance and computational politics. First Monday, 7 July.