Illustration by Somnath Bhatt

‘AI for Social Good’ is everywhere, but who is it good for?

A guest post by Doaa Abu-Elyounes and Karine Gentelet. Doaa is an Affiliated Researcher at the Berkman Klein Center for Internet and Society at Harvard University, a Postdoctoral Researcher with the Chair on AI and Justice at ENS in Paris, and works at UNESCO on AI ethics. Karine is an Associate Professor at Université du Québec en Outaouais (Canada) and Holder of the Chair Abeona-ENS-OBVIA on AI and Social Justice. Thanks to Melanie DaCosta for her help with the research.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.


A recent report by the Global Partnership on AI (GPAI) identified 214 “responsible AI” initiatives aimed at promoting “AI and Ethics,” “AI and Governance,” and “AI and Social Good.” These AI for ‘good’ initiatives are led by private companies focusing on algorithmic design (Google AI, 2021; Microsoft AI, 2021), intergovernmental initiatives (GPAI, 2020), civil society (AI for Good Foundation, 2021), and academia (NeurIPS, 2019). In general, these initiatives aim to highlight the positive social impact that AI and algorithms are argued to promote. Despite the wide range of such initiatives, definitions of what it means for AI to be ethical, responsible, or good for society are hard to come by, and the associated literature offers no single accepted definition (Lillywhite and Wolbring, 2020).

Most initiatives highlighted in the report treat ‘AI for social good’ as a monolithic term, even though AI technologies have very different, and potentially negative, impacts on different groups in society. Technology is packaged as a social good project without any clear frame for what the term ‘social’ refers to or which social contexts and groups it applies to. This essay highlights the lack of substance inherent in the many definitions and initiatives surrounding ‘AI for good’ and argues for a more nuanced approach that examines how each AI system impacts disadvantaged and marginalized groups. In order to fulfill the “good” in AI for social good, we need to shift the narrative from general discussions and objectives to more nuanced, and perhaps quantitative, measures that assess the real-world impact of AI on underserved communities.

AI for Social Good: Definitions

Different terms are used to refer to the broader concept of AI for social good (responsible AI, a good AI society, ethical AI, human-centric AI, etc.), and these terms are sometimes used interchangeably; it is not clear that they hold different meanings (Cath et al., 2018; Tomašev et al., 2020; Barredo Arrieta et al., 2020; Wamba et al., 2021). Very few papers offer a concise definition of the term, and our analysis of the literature reveals that definitions of AI for social good have three main characteristics.

First, they are often broad, aspirational, and hollow. For example, Garlington, Collins, and Bossaller (2019) understand social good as achieving “human well-being on a large scale.” Similarly, for Floridi et al. (2018), at the core of an AI for social good project is the promotion of human dignity and flourishing. In an attempt to unpack the definition, the authors outline four types of benefits that AI technologies offer and the corresponding risks that should be taken into account: (1) enabling human self-realization without devaluing human abilities; (2) enhancing human agency without removing human responsibility; (3) increasing societal capabilities without reducing human control; and (4) cultivating social cohesion without eroding human self-determination (Floridi et al., 2018).

Second, they tend to focus on neglected societal needs. For example, the White House Office of Science and Technology Policy’s 2016 report, “Preparing for the Future of Artificial Intelligence,” emphasizes AI’s potential “to improve people’s lives by helping to solve some of the world’s greatest challenges and inefficiencies” (p. 13). In addition, the Association for the Advancement of Artificial Intelligence’s 2017 symposium on AI for Social Good focused on “addressing social challenges which have not yet received significant attention by the AI community or by the constellation of AI sub communities” (AAAI, 2017). Finally, according to Berendt (2019), AI for the Social Good can be framed as initiatives that avoid “activities […] that do harm to people or the environment” (p. 44).

Third, the definitions often differentiate between economic impact and societal impact. See, for example, the definition offered by the Computing Community Consortium at its 2017 ‘Artificial Intelligence for Social Good’ workshop: “[social good] is intended to focus AI research on areas of endeavor that are to benefit a broad population in a way that may not have direct economic impact or return, but which will enhance the quality of life of a population of individuals through education, safety, health, living environment, and so forth” (Hager et al., 2017). Furthermore, according to Berendt (2019), the intended beneficiaries of AI for Good, AI for Social Good, and the like will generally not be the ones who directly pay for the development or use of this AI. As such, the risk that the needs of disadvantaged groups and those at the margins will not be addressed is high. We argue that AI for social good projects should not be seen as solely socially oriented, but as also driven by larger economic and political forces.

Addressing the Needs of Disadvantaged Groups in AI for Social Good Projects

Despite the variety of definitions of AI for social good, these definitions in general tend to ignore the question of how AI for social good initiatives impact disadvantaged groups (Floridi et al., 2020). Most initiatives lack concrete design specifications that can address the needs and characteristics of minorities and marginalized populations. In fact, Wamba et al. (2021) found that equality and inclusion, including the reduction of biases based on gender or race, is one of the least researched areas within the topic of AI and social good impact. Furthermore, the researchers did not find any papers within this topic on AI and marginalized communities (Wamba et al., p. 18). Similarly, publishing AI principles “to declare that they care about avoiding unintended negative consequences” is growing in popularity among organizations, but ensuring that these organizations actually implement the principles in practice is difficult (Barredo Arrieta et al., 2020, p. 107).

Example: The Social Good Aspect of Automating Welfare Services

One core component of the AI for Social Good universe is the rise of “digital welfare societies,” a term coined by Philip Alston, the UN Special Rapporteur on Extreme Poverty and Human Rights, in a 2019 report of the same name. In the digital welfare society, countries across the world are implementing algorithmic systems that promise to empower citizens, transform the way they interact with the government, and improve governmental services. The technology is always presented to both policymakers and citizens as ineluctable, noble, innovative, and beneficial for all, and hence falls squarely within the category of a social good project. This framing assumes a common understanding, first of the definition of the notion of “social good,” and then of its impact, which is pictured as progressive, uniform, and undifferentiated across social contexts and populations.

Alston’s report tracked the expansion of digital welfare projects in high-, middle-, and low-income countries, and identified six objectives for which technology is used in the welfare state: (1) identity verification; (2) eligibility assessment; (3) welfare benefits calculation and payments; (4) fraud prevention and detection; (5) risk scoring and need classification; and (6) communication between welfare authorities and beneficiaries (Alston, 2019).

Nevertheless, in practice these technologies increase the surveillance and control of segments of society that have no option but to consent in order to access essential welfare services. Technology is packaged as a social good project, but in practice it has very different consequences for disadvantaged groups, marginalized groups, and minorities (Alston, 2019). These changes have also arrived alongside reductions in welfare budgets and increased profits for private companies (Alston, 2019).

Welfare ‘modernization’ and automation also distance beneficiaries from case workers and public officers who would normally conduct personal interviews and analyze applicants’ circumstances. Automated systems can easily suspend benefits and leave individuals to struggle against the bureaucracy if they wish to contest the decision and get their benefits back (Henriques-Gomes, 2019). Moreover, in many cases there is pressure from governments to deploy technologies rapidly in order to deliver noticeable results or ‘fix’ certain social issues. These rushed deployments significantly, and often dramatically, harm marginalized communities. For instance, in India, a man died of starvation because of a technical problem with the identity card that qualified him for food rations (Ratcliffe, 2019). Through these systems, we have created a world where only those who have the resources, time, and energy to reverse erroneous decisions, or who can understand the newly adopted systems, are able to fulfill their rights.

Critiques of AI for Social Good

We are not the first to critique these initiatives, and the broad definitions of ethical AI or AI for social good have led researchers to criticize the terminology on what we see as four main grounds:

  1. The principles cannot be acted upon (meaning that the terminology used to describe the principles that AI should comply with is very general and vague in order to accommodate the interests of different stakeholders, and it does not entail actionable items or enforcement mechanisms);
  2. The ethical frameworks are merely marketing tactics (meaning that they are mainly utilized to increase the trustworthiness of products and improve reputations);
  3. Ethical frameworks have the potential to spur unethical activities while maintaining the appearance of ethical behavior; and
  4. They are used to replace more binding measures, including legislation and governmental regulation (Mittelstadt, 2019, p. 5–9; Hagendorff, 2020, p. 113; Rességuier and Rodrigues, 2020, p. 2; Greene et al., 2019, p. 2126).

In addition to these general criticisms, Floridi (2019) has outlined five ethical risks that can arise when translating general principles into practice, risks that bear in particular on disadvantaged and marginalized groups.

Ethics Shopping: Given the high number of principles for ethical AI, firms can choose the set of principles whose vocabulary justifies their current behaviors, rather than revising their behaviors to make them consistent with a socially accepted ethical framework (Floridi, 2019; Floridi and Cowls, 2021).

Ethics Washing: Activities and proposals that aim to appear ethical but are in fact motivated by political or commercial considerations (Stix, 2021, p. 4; Bietti, 2019, p. 1). More concretely, in the context of AI, “ethics washing” manifests itself in public relations campaigns or activities, such as advisory groups that do not have sufficient power to take concrete steps or that are not vocal enough (Floridi, 2019, p. 187).

Ethics Lobbying: The enormous efforts made by private firms to ‘convince’ others that ethical frameworks are the right mechanism to govern this domain. The goal is to water down and weaken legal norms in order to set the ground for limited compliance (Floridi, 2019, p. 188).

Ethics Dumping: This phrase was introduced by the European Commission in 2013 to characterize the practice of exporting unethical research practices to countries with weaker legal and ethical frameworks for supervising such activities, typically from high-income countries to low-income countries (Nordling, 2018). For example, Cambridge Analytica developed and used algorithmic tools for elections in Nigeria in 2015 and Kenya in 2017 because of the weaker data protection laws there, and later deployed those tools for elections in the U.S. and the UK (Mohamed, Png, and Isaac, 2020).

Ethics Shirking: Defined by Floridi (2019, p. 191) as “the malpractice of doing increasingly less ‘ethical work’ (such as fulfilling duties, respecting rights, and honoring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be.” Ethics shirking can be seen in actions that are not applied equally toward agents or their activities, or that are applied in situations where inequality and power imbalances prevail. For example, with the rise of the gig economy, while the rich and the affluent are able to enjoy the upside of this form of work, including its flexible schedules and entrepreneurial spirit, disadvantaged groups working through those platforms are mainly affected by its lack of stability and labor protections (Floridi, 2018).

AI for social good projects tend to be represented as voluntary, purely altruistic, and evenly progressive, yet this framing further contributes to the stigmatization and marginalization of disadvantaged groups. Social good cannot be defined as a single, linear end, and avoiding harm while increasing social benefits for all cannot be pursued as if the process were universal (Gibney, 2020).

Ethics dumping and ethics shirking in particular are relevant to the central claim of this essay, as they directly convey the importance of addressing the needs and human rights of disadvantaged and marginalized groups and minorities. In AI, these practices often take place in the training and testing phases of algorithmic development, carried out in countries with weaker data protection laws or on populations who have historically faced discrimination. Algorithms deployed in welfare systems, albeit characterized as AI for social good, disproportionately harm disadvantaged and marginalized groups. We are adopting systems in which the behavior of groups that are already under social pressure is constantly scrutinized, and only those with significant resources or bandwidth are able to truly fulfill their rights.

Conclusion

While the aspiration to achieve societal, economic, and environmental goals is noble, a close examination of the definitions of AI for social good reveals that the human rights of disadvantaged and marginalized groups are not properly taken into account. On the contrary, some of the ‘AI for good’ definitions refer to initiatives aiming to tackle unsolved societal issues, or to projects that aim to benefit a certain group without proper consideration of the economic impact. These framings might create the impression that disadvantaged and marginalized groups are a burden on society and need to justify the services they are rightly entitled to. They also glorify AI for social good projects as tackling longstanding social inequalities, and in doing so advocate a techno-solutionist approach to complex social issues. In addition, the automation of welfare services is an illustrative example of how the specifics of implementation are often misaligned with the overall spirit of social good. While it is relatively easy to characterize any digital welfare project as a social good project, in practice the majority of them have negative externalities that fall mainly on minority groups and the vulnerable.

Without an explicit requirement to examine how AI systems impact disadvantaged and marginalized groups, affect social dynamics, and jeopardize societal balance, the fear is that this terminology will remain devoid of any effective and relevant content or action. Social good projects should be developed at a small scale for local contexts, and they should be designed in consultation with the community or social environment impacted by the systems in order to identify core values and needs. Their design and deployment should be supported by human rights compliance, as well as by local accountability and decision-making processes. Before flagging everything as a social good project, it is important to examine it through a critical lens and assess it against an agreed-upon methodology: what would its impact be on underserved communities?


Bibliography

AAAI, (2017) AI for the Social Good, Spring Symposia, https://aaai.org/Symposia/Spring/sss17symposia.php#ss01

Abu-Elyounes, D. (2021) ““Computer says no!”: the impact of automation on the discretionary power of public officers”, Vanderbilt Journal of Entertainment and Technology Law, forthcoming.

AI for Good Foundation (2021) “About us” https://ai4good.org/about-us/

Alston, P. (2019) “The digital welfare dystopia”, A Report by the UN Special Rapporteur on Extreme Poverty and Human Rights; https://www.ohchr.org/EN/NewsEvents/Pages/DisplayNews.aspx?NewsID=25156

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Berendt, B. (2019). AI for the Common Good?! Pitfalls, challenges, and ethics pen-testing. Paladyn, Journal of Behavioral Robotics, 10(1), 44–65. https://doi.org/10.1515/pjbr-2019-0004

Bietti, E. (2019). From Ethics Washing to Ethics Bashing: A View on Tech Ethics from Within Moral Philosophy. Draft; final version published in the Proceedings of the ACM FAT* Conference (FAT* 2020). Available at SSRN: https://ssrn.com/abstract=3513182

Braun, I., (2018) “High-risk citizens”, AlgorithmWatch, https://algorithmwatch.org/en/high-risk-citizens/

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2017). Artificial Intelligence and the ‘Good Society’: the US, EU, and UK approach. Science and Engineering Ethics, 24, 505–528. https://doi.org/10.1007/s11948-017-9901-7

Eubanks, V., (2018) “Automating inequality: how high-tech tools profile, police, and punish the poor” St. Martin’s Press.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). Ai4people — an ethical framework for a good ai society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Floridi, L. (2019). Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Philosophy & Technology, 32(2), 185–193. https://doi.org/10.1007/s13347-019-00354-x

Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to Design AI for Social Good: Seven Essential Factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5

Fosso Wamba, S., Bawack, R. E., Guthrie, C., Queiroz, M. M., & Carillo, K. D. (2021). Are we preparing for a good AI society? A bibliometric review and research agenda. Technological Forecasting and Social Change, 164, 1–27. https://doi.org/10.1016/j.techfore.2020.120482

Garlington, S. B., Collins, M. E., & Bossaller, M. R. D. (2019). An Ethical Foundation for Social Good: Virtue Theory and Solidarity. Research on Social Work Practice, 30, 196–204.

Gibney, E. (2020) “The battle for ethical AI at the world’s biggest machine learning conference”, Nature, volume 577 p. 609.

Google AI, (2021) “AI for social good”, Google; https://ai.google/social-good/

GPAI (2020) “Areas for future action in the responsible AI ecosystem”, a report of the Working Group on Responsible AI presented in the GPAI’s first plenary on Dec. 2020; https://gpai.ai/projects/responsible-ai/

Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. Proceedings of the 52nd Hawaii International Conference on System Sciences, 2122–2131. https://doi.org/10.24251/hicss.2019.258

Hagendorff, T. (2020). The ethics of AI Ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-09517-8

Hager, G.D., Drobnis, A., Fang, F., Ghani, R., Greenwald, A., Lyons, T., Parkes, D.C., Schultz, J., Saria, S., Smith, S.F., et al. (2017) “Artificial Intelligence for Social Good Workshop Report”, https://cra.org/ccc/wp-content/uploads/sites/2/2016/04/AI-for-Social-Good-Workshop-Report.pdf

Henriques-Gomes, L. (2019) “The automated system leaving welfare recipients cut off with nowhere to turn”, The Guardian; https://www.theguardian.com/technology/2019/oct/16/automated-messages-welfare-australia-system

Kasper, D. (2007). Privacy as a Social Good. Social Thought & Research, 28, 165–189. Retrieved March 25, 2021, from http://www.jstor.org/stable/23252125

Lillywhite, A., & Wolbring, G. (2020). Coverage of Artificial Intelligence and Machine Learning within Academic Literature, Canadian Newspapers, and Twitter Tweets: The Case of Disabled People. Societies, 10(1), 1–27. https://doi.org/10.3390/soc10010023

Microsoft AI, (2021) “Using AI for good with Microsoft AI”; https://www.microsoft.com/en-us/ai/ai-for-good

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507. https://doi.org/10.1038/s42256-019-0114-4

Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33, 659–684.

Mor Barak, M. E. (2018). The practice and science of social GOOD: Emerging paths to positive social impact. Research on Social Work Practice, 30(2), 139–150. https://doi.org/10.1177/1049731517745600

NeurIPS, (2019) “Joint workshop on AI for social good”, NeurIPS 2019; https://aiforsocialgood.github.io/neurips2019/

Nordling, L. (2018) “Europe’s biggest research fund cracks down on “ethics dumping”, Nature News https://www.nature.com/articles/d41586-018-05616-w

Ratcliffe, R. (2019) “How a glitch in India’s biometric welfare system can be lethal”, The Guardian; https://www.theguardian.com/technology/2019/oct/16/glitch-india-biometric-welfare-system-starvation

Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data & Society, 7(2), 205395172094254. https://doi.org/10.1177/2053951720942541

Stix, C. (2021). Actionable Principles for Artificial Intelligence Policy: Three Pathways. Science and Engineering Ethics, 27(1). https://doi.org/10.1007/s11948-020-00277-3

The White House, (2016) “Preparing for the Future of Artificial Intelligence”, Executive Office of the President, National Science and Technology Council, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf

Tomašev, N., Cornebise, J., Hutter, F., Mohamed, S., Picciariello, A., Connelly, B., … Clopath, C. (2020). AI for social good: unlocking the opportunity for positive impact. Nature Communications, 11(1), 1–6. https://doi.org/10.1038/s41467-020-15871-z

Viswanathan, M., Seth, A., Gau, R., & Chaturvedi, A. (2009). Ingraining Product-Relevant Social Good into Business Processes in Subsistence Marketplaces: The Sustainable Market Orientation. Journal of Macromarketing, 29(4), 406–425. https://doi.org/10.1177/0276146709345620