Illustration by Somnath Bhatt

Care as a Tool, Care as a Weapon

A guest post by Hannah Zeavin. Hannah is a Lecturer in the Departments of English and History at the University of California, Berkeley, and the author of the forthcoming book, The Distance Cure: A History of Teletherapy (MIT Press, 2021). Twitter: @HZeavin.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

AI for care is as old as AI itself. Some of the earliest demonstrations of natural language processing were chatbots that simulated therapists and medical providers. Joseph Weizenbaum’s ELIZA experiment (MIT, 1964–1966), for example, opened the question of whether AI could care for humans. These early tools triggered field-wide debates about whether, and to what extent, AI could automate care, with Weizenbaum himself arguing that machines should not perform this work at all. In the fifty or so years since those first attempts to program care, AI and machine learning have become entrenched in domestic and healthcare settings worldwide (whether in digital therapy, diagnostic tools, or even childcare), put forward as alternatives to human-to-human care, or as stopgap measures where other infrastructures of care are not accessible.

The reality is, of course, more complicated. Whether in ER algorithms that favor white patients for admission over Black patients with similar conditions, or in therapy apps that can only be used by those who fit narrow terms of service, AI in care extends the reach of inequality. While these applications and interventions are marketed as “tech for good” and as tools to bring more individuals into the healthcare system, these alternative forms of AI “care” are generally not held to the same research standards as medical care and tend to exclude, and harm, the very populations they claim to help.

As science studies scholars Ruha Benjamin and Danya Glabau have argued, digital health interventions often promise “transcendence”: that individuals, groups, and society might get beyond our own human history and its structural violence and offer everyone access to a needed service. Techno-care companies also assure that those excluded from traditional care will be refolded into systems and experiences of care.¹ The opposite holds. AI/machine learning-based care interventions embed and recodify race and gender, whether in the feminization of robots that act as “surrogate humans” in care scenarios, or in the deployment of algorithms that foster the conditions of medical redlining and the further flourishing of white supremacy in medicine in the US context.² These forms of exclusionary violence are not undone simply by creating and marketing digital solutions; instead, they are reprised, remediated, and amplified where these historical forms of violence meet their contemporary twin, digital redlining. Where this refolding of the excluded does take place, it can be in the service of extraction, datafication, capture, and control.

“Techno-care companies assure that those excluded from traditional care will be refolded into systems and experiences of care. The opposite holds.”

In my book, The Distance Cure: A History of Teletherapy, I argue that algorithms for care are deployed in the service of predictive control: the more one uses a platform for care, the more minable data there is at the mercy of its container; deidentification and anonymity are not static states, and consent is far from always informed.³ Automated care is held up as accessible: it allows help to reach not only more patients, but also those who otherwise wouldn’t be able to come to an office (patients who face racial discrimination, or who are rural, poor, housebound, disabled, or otherwise traditionally marginalized by the care disciplines). Yet these same users are often the most systemically vulnerable to counting, data collection, prediction, and intervention by state social services, sometimes with lethal consequences when police are called to the site of crisis.⁴

From algorithms that purport to detect suicide risk, to those that direct (and withhold) medical care, to robots that perform elder care, these digital remediations of decision-making, ministration, interaction, and attention are not somehow magically free from the complexities and challenges of providing adequate medical care. These interventions, I argue, are often deployed under the guise of democratic promise, development, and accessibility, while paradoxically retrenching the very discrimination endemic to human-to-human modes of care and providing new sites through which surveillance can take place.⁵

What counts as an intervention, and what modes of relating to self and other these interventions foster, are also interlocking problems in the coding and delivery of AI for care. These interventions can work, I argue, mystically and proprietarily, in the background at the level of the hospital, county, city, and state, even as they are marketed to the individual through their device, which can render care indistinct from just another moment of habitual media use. The capaciousness of the term “care” also allows some AI/machine learning interventions to move into the market without traditional vetting, or to promise a “cure” — itself a problem.⁶ An app that purports to take care of the mind may be called a mental health intervention without being designated a therapy, thereby remaining free from oversight by governing bodies. Such scripts appear and become consumable, downloadable, and deployable without conversation, at the whim and mercy of market logics. And, as I show, they just as easily disappear, leaving users without the intervention upon which they may have come to rely.⁷

“The capaciousness of the term ‘care’ also allows some AI/machine learning interventions to move into the market without traditional vetting, or to promise a ‘cure’ — itself a problem.”

Instead of the medical system taking responsibility for patients, patients are now being told by virtual agents to take care of themselves, rescinding the possible collaboration between patients and practitioners. In these scenarios, the very person seeking care becomes ultimately responsible for coordinating their own care, whether that means a gratitude list, a step count, or a course of CBT. This can occur as part of corporate wellness plans offered in part to manage employees, or as insurance-related incentives; these interventions become forms of self-help and self-improvement long attached to ideologies of individualism. I call this form of care a system of “auto-intimacy.”⁸ Without care of another by another, caring for oneself is often overtly framed through pleasure and games, turning the work of care into play and play into the work of care. This reward-based play, or gamification, is often at the center of AI-based care interventions and ensures compliance and the return, again and again, to the screen, platform, or device.

Self-care is too frequently figured not via its radical potential,⁹ but as the ultimate form of capitalistic on-demand access, meeting the person in need not only where and when they are, but as themselves for themselves. In return, these systems require care themselves, whether in the form of engagement and payment or of updates and maintenance.¹⁰ Put bluntly, this can mean that humans provide care for apps and platforms that claim to have care at their center; we have irreciprocal relationships with our minders under the sign of relational care.¹¹ Putting the responsibility for treatment solely on the person seeking care, whether palliative or in crisis, is not only a problem of how care is delivered, and by which mechanisms, but of care itself.

Care is always relational, and relational care is — always — uneven. Good help is hard to find. Remediations of care remediate the problems of exclusion, too, at scale. Radical care, as Hi’ilei Hobart and Tamara Kneese argue, invites new affective and relational ways of caring mutually, of being for one another, that “push back against structural disadvantage.”¹² Kim TallBear argues that these relations need to be explicitly networked, not hierarchical, proposing “caretaking relations — both human and other-than-human.”¹³ Designing specifically for the utopian, radical collective and against supremacist ideologies is paramount, lest we reencode the devastating problems of care infrastructures and their failures, repressing the central fact that an algorithm will not (and cannot) be there as true aid in the moments when we need care the most.¹⁴ We cannot hope to enfold those whom care forgot while repressing another central fact: care is a tool, but it is also, too often, a weapon.


[1] Danya Glabau, “The Dark Matter of Digital Health,” Public Books, April 14, 2020; Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge: Polity Press, 2019).

[2] Neda Atanasoski and Kalindi Vora, “Surrogate Humanity: Posthuman Networks and the (Racialized) Obsolescence of Labor,” Catalyst: Feminism, Theory, Technoscience 1, no. 1 (2015); Safiya Umoja Noble, “Robots, Race, and Gender,” Fotomuseum.ch, January 30, 2018, https://www.fotomuseum.ch/en/explore/stillsearching/articles/154485_robots_race_and_gender.

[3] Hannah Zeavin, The Distance Cure: A History of Teletherapy (Cambridge: MIT Press, 2021).

[4] Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown Publishing, 2016); Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: Picador, 2019).

[5] Zeavin, The Distance Cure: A History of Teletherapy (Cambridge: MIT Press, 2021). See also Zeavin, “The Third Choice: Suicide Hotlines, Psychiatry, and the Police,” Somatosphere, November 2, 2020.

[6] For an expansion of this idea, see Hannah Zeavin, “No Cure,” Public Books, June 2, 2021.

[7] Zeavin, The Distance Cure: A History of Teletherapy (Cambridge: MIT Press, 2021).

[8] Zeavin, The Distance Cure: A History of Teletherapy (Cambridge: MIT Press, 2021).

[9] Audre Lorde, A Burst of Light (Ithaca, NY: Firebrand Books, 1988); Inna Michaeli, “Self-Care: An Act of Political Warfare or a Neoliberal Trap?” Development 60, nos. 1–2 (2017): 50–56.

[10] See Wendy Hui Kyong Chun, Updating to Remain the Same: Habitual New Media (Cambridge: MIT Press, 2017); Shannon Mattern, “Maintenance and Care,” Places Journal, November 2018, placesjournal.org/article/maintenance-and-care/; Laura Forlano, “Maintaining, Repairing, and Caring for the Multiple Subject,” Continent 6, no. 1 (2017): 30–35.

[11] See Sherry Turkle, most recently The Empathy Diaries (New York: Penguin Press, 2021).

[12] Hi’ilei Hobart and Tamara Kneese, “Radical Care,” special issue, Social Text 38, no. 1 (142) (March 2020): 8.

[13] Kim TallBear, “Caretaking Relations, Not American Dreaming,” Kalfou, Volume 6, Issue 1 (Spring 2019), 25.

[14] Shaowen Bardzell, “Utopias of Participation: Feminism, Design, and the Futures.” ACM Trans. Comput.-Hum. Interact. 25, 1, Article 6 (February 2018). DOI: https://doi.org/10.1145/3127359.