Illustration by Somnath Bhatt

Is empathy the wrong goal for computational models?

A guest post by Hannah Zeavin. Hannah is a Lecturer in the Departments of English and History at the University of California, Berkeley, and the author of The Distance Cure: A History of Teletherapy (MIT Press, 2021). Twitter: @HZeavin.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.


Debates around empathy in AI are everywhere: should we feel for our machines? Do they feel for us? How can we build, code, and design machines towards the empathetic? But whether empathy should be the goal at all is rarely up for discussion.

The term empathy was developed in a lab and has a strangely short etymological history, only coined in English in the early 20th century by social scientists at Cornell University and the University of Cambridge. Born out of this Anglo-American academic collaboration, the dominant western concept of empathy is a translation of the German word Einfühlung, or “feeling-in,” and the ancient Greek empathos, “in feeling.”¹ Initially, as historian of science Susan Lanzoni writes, this new term named the way humans project their feelings onto the world, especially in literature and art. The work of empathy was to describe the occupation of objects by subjects, not, as is now held, the work of one subject understanding another. Empathy has since settled into a more stable definition: it proposes that one can feel what is not proper to oneself but to another, as one’s own.²

In the contemporary world of AI and machine learning, empathy is too frequently treated as a blanket solution for what ails technology. The logic goes something like this: if we can understand the user’s experience more deeply, we can code a semblance of good relations. This drives development in robotics, virtual agents, and chatbots to replace humans for care, companionship, and other forms of automated (and frequently feminized) labor. While Ovetta Sampson of Microsoft Research writes that many design conversations “start with empathy,” she argues that empathy is too frequently misdefined and therefore misdeveloped in technologies, resulting in the ubiquitous spread of “false empathy.”³

Sampson is far from alone in arguing against a false empathy in AI — and for a deeper engagement with it as a form of relation and experience. Sherry Turkle of the Massachusetts Institute of Technology goes one step further, arguing that all mechanical empathy is false. For three decades, she has been one of the most prominent proponents of empathy as a salve for what ails human culture in the age of ubiquitous computing. She argues for human empathy — that is, a deeper understanding of humans by other humans — and against this false mechanical empathy.⁴

For Turkle, empathy is humanity’s “killer app.” She argues that our machines are coded to perform tasks that make them appear empathetic, but have also been designed specifically so that humans care for them. Programs seem to understand us, and elicit human empathy in return. Yet Turkle vehemently argues that humans should not care for machines — they should care, empathically, for one another. Designing towards empathy, she argues, is a false cure for the contemporary computing landscape and its users, and may in fact make matters worse.

While their approaches differ, Sampson and Turkle have each taken up the call to empathetic arms first issued at mid-century, when early work on AI began. They are two of the most prominent researchers to argue that what separates machines from humans is, to some degree, the human capacity for empathy and the machine’s lack of it. Much of their research turns on the notion that even if machines are to be intelligent, it does not follow that they would have the emotional intelligence of a human.

Joseph Weizenbaum of MIT, one of the earliest Natural Language Processing computer scientists, put it another way: “I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other’s experience of those problems and, in large part by way of his own empathic recognition of them, himself come to understand them”⁵ (emphasis added). It was precisely the machine’s absence of such engagement that worried him. Weizenbaum delimits the possibilities of AI along the lines of empathy: empathy was, in his moment as in ours, impossible to code.

Where Sampson argues against false empathy by turning to a deep engagement with the world, and Turkle argues against its use in AI/machine learning by delimiting it to the human, I argue that turning to empathy at all may present issues. Empathy does not necessarily instruct good relations; it can in fact produce the opposite. And yet everywhere you look in contemporary AI, especially at the center of AI for Good and Design Thinking programs, debates around empathy — whether it is a human-only experience, what it might mean definitionally, and so on — almost always assume that it’s a moral good. As a concept and moral aim, empathy has been deployed as a rationale for investment in the arts and the humanities and, in parallel, has been thought of as the backbone of worthwhile technological innovation. When empathy is found lacking, it is understood to be a grave loss. And so empathy is now everywhere: in entertainment technologies, call centers, robotics, hiring algorithms, and facial emotion recognition used in myriad contexts.

Fritz Breithaupt, a theorist and historian, complicates the status of empathy as moral, arguing that empathy can be what precedes compassion⁶ but is just as likely to license sadism. Knowing that we might cause pain doesn’t mean that we stop causing it — it might make us cause it more effectively, more intensely. That empathy is inherently good is a misdirection: imagining that one feels what the other is feeling in order to better understand them can become a site of violence. This specter has long been attached to speculative futures in which we depend on AI and virtual agents, but it is already present in human-to-human relations. Empathy, which is held up as an eternal ethical value, one we impart to children, one we may chastise ourselves and others for failing to manifest, and one that is central to computational AI, especially in its applications for robotics, must then be interrogated. Why are we so attached to naming good human relations and regard for others through this specific form of relating? Put another way, if it turns out that empathy is not a moral good, but far from it, why is it at the center of humanistic AI?

Empathy can be implicated in small-scale violence between individuals; as the artist Chloe Bass put it recently, empathy is a hierarchical feeling, allowing us to map another as outside us and lower than ourselves. It is also complicit in structures of white supremacy and racism, misogyny, classism, homophobia, transphobia, and ableism: what can be called empathy can quickly turn into what bell hooks calls “eating the other,” consuming and cosplaying the pain of others (typically the pain, not the joy, although that, too). Empathy, I argue, only exists on a spectrum of failure. Or, when it succeeds, its success is defined by a failure; as Saidiya Hartman says, empathy “fails to expand the space of the other but merely places the self in its stead.”⁷

This empathy as consumption and play of the other for pleasure is at the core of virtual reality (VR), a major recent example of AI parading as empathy. “Big Tech” has argued that VR can teach us to be more empathetic: it puts users in the literal virtual shoes of another, it is fully encompassing, and its proponents have called it the digital novel. Lisa Nakamura, however, has recently critiqued the notion of empathetic design in technology as highly problematic, a red herring that conceals the drive to clean up the public’s understanding of new media. Nakamura writes that VR is a “technology of empathy…that connects people across difference [and] is part and parcel of Big Tech’s attempt to rebrand VR as a curative for the digital industries’ recently scrutinized contributions to exacerbating class inequality, violating users’ privacy, and amplifying far-right fascist racism and sexism.” Nakamura cautions that this is, on the one hand, “a nostalgic callback to the idealized past of the internet” and nothing more than a PR stunt by the “newly chastised digital media industry,” and, on the other, an experience for the user in which they are “immersed in virtue as well as pleasurable pain.”⁸

The critic and novelist Namwali Serpell echoes this critique of empathy on the grounds of art, empathy’s original domain: “The empathy model…is a gateway drug to white saviorism, with its familiar blend of propaganda, pornography, and paternalism. It’s an emotional palliative that distracts us from real inequities, on the page and on screen, to say nothing of our actual lives. And it has imposed upon readers and viewers the idea that they can and ought to use art to inhabit others, especially the marginalized.”⁹ If we apply Serpell’s critique of empathy in art to AI/machine learning, then even if tech companies were to accomplish their goal of programming a fully empathetic machine, it would likely only perpetuate white or male saviorism.

Empathy is then perhaps the wrong goal for computational models.¹⁰ As Leo Bersani writes, “No recognizably political solution can be durable without something approaching a mutation in our most intimate relational system.”¹¹ Empathy is not that mutation — it is the norm or, as the axiom goes, it is a feature, not a bug. We must then turn our attention to other modes of interaction, those forged between humans beyond the lab.


References

[1] Susan Lanzoni, Empathy: A History (New Haven: Yale University Press, 2018).

[2] C. Daniel Batson identifies eight uses of the term: “knowing another’s thoughts and feelings; imagining another’s thoughts and feelings; adopting the posture of another; actually feeling as another does; imagining how one would feel or think in another’s place; feeling distress at another’s suffering; feeling for another’s suffering, sometimes called pity or compassion; and projecting oneself into another’s situation.” C. Daniel Batson, “These Things Called Empathy: Eight Related but Distinct Phenomena,” in The Social Neuroscience of Empathy, eds. Jean Decety and William Ickes (Cambridge, MA: MIT Press, 2009), 3.

[3] Ovetta Sampson, “Stop Bastardizing Design with False Empathy,” Medium, https://medium.com/swlh/stop-bastardizing-design-with-false-empathy-6a06d431bab3

[4] See Sherry Turkle, The Second Self: Computers and the Human Spirit (Cambridge, MA: MIT Press, 1985); Life on the Screen (New York: Simon and Schuster, 1995); The Empathy Diaries (New York: Penguin Press, 2021).

[5] Joseph Weizenbaum, Computer Power and Human Reason (San Francisco: W. H. Freeman, 1976), 5–6.

[6] This is Paul Bloom’s famous argument about empathy — that it leads to compassion, which slows rational thinking; we might be so focused on the individual via the empathetic that we forget the collective. Paul Bloom, Against Empathy: The Case for Rational Compassion (New York: Ecco, 2016).

[7] Saidiya Hartman, Scenes of Subjection: Terror, Slavery, and Self-Making in Nineteenth-Century America (Oxford: Oxford University Press, 1997), 20.

[8] Lisa Nakamura, “Feeling Good About Feeling Bad: Virtuous Virtual Reality and the Automation of Racial Empathy,” Journal of Visual Culture 19, no. 1 (2020): 52–53.

[9] Namwali Serpell, “The Banality of Empathy,” New York Review of Books, March 2, 2019.

[10] For more on empathy, its history and application to AI/algorithmic care, see Hannah Zeavin, The Distance Cure: A History of Teletherapy (Cambridge: MIT Press, 2021), especially the coda.

[11] Leo Bersani and Adam Phillips, Intimacies (Chicago: University of Chicago Press, 2008), 66–67.