Illustration by Somnath Bhatt

AI Maintenance as Care, Respect, and Guardianship

A guest post by writers and researchers Anna Pendergrast and Kelly Pendergrast. Kelly and Anna co-founded and work together as Antistatic, a consultancy that brings clarity to complex issues around the environment and technology. Their writing appears in Real Life, The Spinoff, The California Review of Images and Mark Zuckerberg, and elsewhere. Twitter: @APndrgrst | @k_pendergrast

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

To bring an AI system into the world is also to bring about a responsibility for its care. Maintenance is “both absolutely necessary and usually neglected,”¹ write historians of technology Andrew Russell and Lee Vinsel, who co-founded the Maintainers network of maintenance scholars and practitioners. Maintenance is forgettable (until it isn’t), especially compared with the more spectacular and photogenic phases of a product or system’s lifecycle: designing, inventing, building, and even repair. Without maintenance, ropes fray, data decay, and things fall apart.

The consequences of inattention to maintenance can be immediate, but they are also generational.² California’s 2018 Camp Fire, which killed 85 people, was likely caused by a faulty transmission line. Utility company PG&E was subsequently found to have failed to properly maintain and inspect its transmission lines for many years — including the tower that sparked the Camp Fire.³ The consequences of neglected maintenance are not neutral: they are often unevenly distributed along lines of privilege and disenfranchisement.⁴ Many of the residents killed and displaced by the Camp Fire were already facing housing insecurity, and some had previously been displaced from other, more expensive parts of California.⁵

In public and political discourse around the ethics of AI, conversations tend to focus on the design, development, and near-term effects of AI systems, or on correcting them after catastrophic failures, rather than on their ongoing maintenance and upkeep over time. As commonly understood, maintenance is the work that keeps physical and digital products, services, and infrastructures operating as designed. We propose maintenance as an under-researched and under-resourced area of study in AI. To think about AI through the lens of maintenance practices is one way to acknowledge the long life of technological systems and their impacts on people and the environment.

Social and digital infrastructure require maintenance just as surely as power pylons and leaky roofs. AI systems are complex sociotechnical assemblages comprising data sets, algorithms and models, human labor, and wider institutional structures. They need to be monitored and adjusted to ensure accurate or desirable outputs, code bases need to be kept up to date, bugs need to be fixed, and data sets managed. Without close attention, AI systems can produce damaging outputs, especially those that ‘learn’ from dynamic data sets. Databases are the backbone of many AI technologies, but unless they are updated regularly they represent only a snapshot of the world at a given time. For example, for AI-enabled autonomous vehicles to function smoothly without causing accidents, mapping software needs to be highly accurate and up to date, which requires both the datasets and the physical roading infrastructure they represent to be maintained consistently.⁶ Even for existing mobility platforms that claim to use AI to make real-time decisions about routes and pricing, an army of subcontractors regularly cleans and updates map data, and volunteers carefully collect open map data in the first place. In the context of AI systems, with their complex inputs and iterative development, maintenance overlaps heavily with design, testing, and repair. Still, an approach to AI that centers incremental upkeep and ongoing care would represent a departure from business as usual, and could reshape how systems are understood and built.

“AI systems are complex sociotechnical assemblages … They need to be monitored and adjusted to ensure accurate or desirable outputs, code bases need to be kept up to date, bugs need to be fixed, and data sets managed.”
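To make this monitoring work concrete, here is a minimal sketch in Python of one such maintenance check: a drift monitor that compares live inputs against the snapshot a model was trained on. The function, the alert threshold, and the stand-in data are illustrative assumptions on our part, not a description of any particular production system.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Population Stability Index: a simple measure of how far a live
    feature distribution has drifted from the training-time snapshot."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip empty bins so the logarithm stays defined.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Hypothetical check, run on a schedule as routine upkeep.
rng = np.random.default_rng(0)
training_snapshot = rng.normal(0.0, 1.0, 5000)  # stand-in for training data
live_inputs = rng.normal(0.4, 1.2, 5000)        # stand-in for production data

psi = population_stability_index(training_snapshot, live_inputs)
if psi > 0.2:  # 0.2 is a common rule-of-thumb alert level
    print(f"PSI {psi:.3f}: inputs have drifted; review data and consider retraining")
```

Checks like this are unglamorous, but they are exactly the kind of scheduled, incremental attention that a maintenance-centric approach asks for.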

Maintenance is not the inverse of innovation and creation: it is the necessary complement. “Maintenance is the key to ensuring that the benefits of technology are felt in their full depth and breadth,” write Vinsel and Russell.⁷ To work towards AI accountability, transparency, fairness, or human rights compliance — to ensure technology is beneficial for the many rather than just the few — is to engage with AI maintenance practices. This might include monitoring systems and making small changes over time to ensure equitable outcomes, or keeping documentation up to date so that audits and accountability remain possible. As companies, NGOs, and governments work towards operationalizing the AI ethics principles that have proliferated over the past half-decade, acts of maintenance are key components of this proposed work, though they are rarely described as such. Where discussions of “ethics” risk vagueness, abstraction, and impracticality, a maintenance-centric approach to AI might provide a concrete mechanism through which some aspirations of ethical AI are operationalized — both for the people affected by AI systems and for those who perform the “hidden” work of AI.⁸
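As a sketch of what auditable, up-to-date documentation might look like in practice, the snippet below keeps a model-card-style record with an append-only maintenance log. The record format, the field names, and the “eligibility-screener” system are hypothetical choices of ours, not an established standard.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class MaintenanceEntry:
    when: str       # ISO date of the change
    who: str        # team or person responsible
    change: str     # what was done
    rationale: str  # why it was done

@dataclass
class ModelRecord:
    """Model-card-style documentation kept alongside a deployed model,
    so audits can trace what changed, when, and why."""
    name: str
    version: str
    intended_use: str
    known_limitations: list
    maintenance_log: list = field(default_factory=list)

    def log_change(self, who: str, change: str, rationale: str) -> None:
        self.maintenance_log.append(
            MaintenanceEntry(date.today().isoformat(), who, change, rationale))

record = ModelRecord(
    name="eligibility-screener",  # hypothetical system
    version="2.3.1",
    intended_use="Triage support only; final decisions rest with staff.",
    known_limitations=["Trained on 2019-2021 data; may lag policy changes."],
)
record.log_change(
    who="data team",
    change="Refreshed training set to include 2022 cases",
    rationale="Audit found under-representation of rural applicants",
)
print(json.dumps(asdict(record), indent=2))
```

A record like this makes maintenance legible: every adjustment leaves a trace that an auditor, or an affected community, can later inspect.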

The essential labor of maintaining and caring for AI and its systems is as broad as the systems themselves, and the people that perform these acts are more varied and dispersed than most AI narratives allow for. They include the thousands of low-paid data workers performing digital piecework for a few cents a task,⁹ the lauded computer scientists responsible for building and training models, and the policy analysts, service designers, and bureaucrats who decide when AI is deployed and why. The majority of AI workers are low-paid and often erased from AI narratives. Amazon’s Mechanical Turk program, the engine behind many large data sets and AI projects, relies on freelancers who perform vital digital piecework: labeling data, classifying images, rating the toxicity of tweets, recording voice samples, and completing social science surveys.¹⁰ This work to build, clean, and filter data and information is necessary throughout the life cycle of an AI project, from design to build to maintenance and repair, and many of these systems would fail if maintenance were ignored. Any accounting of an AI system’s impacts or ethics should consider the system’s maintainers, their working conditions, and their agency.

“The essential labor of maintaining and caring for AI and its systems is as broad as the systems themselves, and the people that perform these acts are more varied and dispersed than most AI narratives allow for.”

For guidance on how maintenance can help design, make, and care for the world — including the world of AI — we look to existing frameworks from Indigenous, feminist, and other radical traditions; here we draw specifically on Māori frameworks as an example. In doing so, we recognise the position from which we write: with privilege as Pākehā (European New Zealanders), and with responsibilities as Tangata Tiriti (roughly translated as people who have the right to live in New Zealand as a result of Te Tiriti o Waitangi/The Treaty of Waitangi).¹¹ It is in this context that we were introduced to, and later came to advocate for, Māori data sovereignty principles and broader te ao Māori perspectives. Our work, and our journey to become better Tangata Tiriti, has been greatly informed by the Māori scholars and activists we have worked alongside and look up to, and we acknowledge their mahi (work).¹²

The growing field of Māori data sovereignty, and Indigenous data sovereignty more broadly, upends extractive, colonial understandings of data and AI, and insists that AI systems not be seen as abstract assemblages of data and mathematical models. “Mainstream discussions of algorithms represent them as somewhat abstract entities, but this representation does not hold from a te Ao Māori point of view,”¹³ write researchers at Aotearoa New Zealand’s Te Kotahi Research Institute. Instead, Te Kotahi and others argue that AI should be viewed as living systems, inextricably connected to the people whose data are used in them and to the physical sources of energy, hardware, and space that ground them. The data used in these systems, especially personal data about people, have mauri (lifeforce) and whakapapa (genealogy), and cannot be severed from the people to whom they relate. This groundedness and relationality invokes the need for ongoing care, respect, and guardianship of data and AI models. “These models then ought to be viewed as both living — requiring kaitiakitanga [guardianship] — and relational — implicated in a network of obligations and relationships that need to be appropriately maintained.”¹⁴

“You don’t leave a carving alone in the rain”¹⁵

The kaitiaki (guardian) of a system must ensure it is protected, stewarded, and maintained so it honors the lives it intersects with, and the materials and history it represents. This applies equally to AI models and precious cultural artifacts.

On its surface, maintenance may seem biased towards the seamless continuity of existing systems — the acceptance of a status quo that enables discrimination and unjust outcomes. Why maintain systems that do harm? Critical maintenance scholars note that “maintenance is not the opposite of change, however, and its primary aim and value is not to uphold stasis.”¹⁶ Maintenance provides space for reflection and reimagining, and an opportunity to intervene in a system even as you ensure its continued function. This might even include the decommissioning of AI systems or data, as the Feminist Data Manifest-No suggests:¹⁷ “We commit to… preparing bodies or corpuses of data to be laid to rest when they are not being used in service to the people about whom they were created.” Maintainers, with their intimate relationship to machines and systems, are often best positioned to critically assess how things work and to suggest how they could be tweaked to work more efficiently, equitably, or sustainably.
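The Manifest-No’s commitment to laying data to rest can itself be read as a concrete maintenance task. Below is a deliberately small, hypothetical sketch of a retention review; the policy window, field names, and catalogue entries are invented for illustration, and actual retirement decisions should be made with, not for, the communities the data describe.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: datasets untouched for a year and no longer in
# service to the people they describe are flagged for retirement review.
RETENTION_WINDOW = timedelta(days=365)

catalogue = [  # illustrative records, not a real data catalogue
    {"name": "survey-2016", "last_used": datetime(2019, 3, 1, tzinfo=timezone.utc),
     "in_service_to_community": False},
    {"name": "language-corpus", "last_used": datetime(2024, 11, 2, tzinfo=timezone.utc),
     "in_service_to_community": True},
]

now = datetime.now(timezone.utc)
for dataset in catalogue:
    stale = (now - dataset["last_used"]) > RETENTION_WINDOW
    if stale and not dataset["in_service_to_community"]:
        # Flag for human review; deletion or archiving is a decision to be
        # made together with the communities the data relate to.
        print(f"{dataset['name']}: flag for review and possible retirement")
```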

By bringing maintenance into the AI vernacular, we open space to better consider both the people affected by AI systems and the people who maintain them. We welcome AI maintenance at its most radical, and resist limited ideas of maintenance that allow only for the rote continuity of systems that work for some and oppress others. This can take many forms, and a broad swath of projects already underway provides inspiration and jumping-off points: from Write the Docs,¹⁸ a global community producing open-source documentation to ensure the ongoing maintainability of systems (and, in doing so, building the connections to maintain themselves), to Te Hiku Media’s Māori language tools, built using crowd-sourced labelled datasets and governed by the organisation’s own kaitiakitanga licence to ensure that benefits gained from the language data remain with Māori.¹⁹ Our AI maintenance is a critical intervention: an ongoing attention to the working conditions of maintainers and the lived experience of those affected by systems, and an opening of space for other worldviews.


[1] Russell, A. & Vinsel, L. 2020. The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most.

[2] Mattern, S. 2018. “Maintenance and Care.” Places Journal, November 2018. https://placesjournal.org/article/maintenance-and-care/

[3] Gold, R. & Blunt, K. 2019. “PG&E Had Systemic Problems With Power Line Maintenance, California Probe Finds.” The Wall Street Journal, 19 December 2019. https://www.wsj.com/articles/pg-e-had-systemic-problems-with-power-line-maintenance-california-probe-finds-11575338873

[4] The Information Maintainers. Olson, D., Meyerson, J., Parsons, M., Castro, J., Lassere, M., Wright, D., … Acker, A. 2019. Information Maintenance as a Practice of Care. https://doi.org/10.5281/zenodo.3236409

[5] “More Than 1,000 Families Still Searching For Homes 6 Months After The Camp Fire.” NPR, May 8, 2019. https://www.npr.org/2019/05/08/721057281/more-than-1-000-families-still-searching-for-homes-6-months-after-the-camp-fire

[6] McKinsey & Company. 2019. A new look at autonomous-vehicle infrastructure. https://www.mckinsey.com/industries/travel-logistics-and-infrastructure/our-insights/a-new-look-at-autonomous-vehicle-infrastructure

[7] Russell, A. & Vinsel, L. 2020. The Innovation Delusion: How Our Obsession with the New Has Disrupted the Work That Matters Most.

[8] AI Ethics Impact Group. 2020. From Principles to Practice: An interdisciplinary framework to operationalise AI ethics. https://www.ai-ethics-impact.org/resource/blob/1961130/c6db9894ee73aefa489d6249f5ee2b9f/aieig---report---download-hb-data.pdf

[9] For more discussion and framing of data workers’ essential role in developing and maintaining AI systems, see Sambasivan, N., Kapania, S., Highfill, H., Akrong, D., Paritosh, P. & Aroyo, L. 2021. “Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA. For additional information on wages, drawn from a survey of Amazon Mechanical Turk workers, see also: https://www.nytimes.com/interactive/2019/11/15/nyregion/amazon-mechanical-turk.html

[10] Stanley, S. 2021. “The Workers Perspective.” TWC Newsletter Issue 5: Living in the Hidden Realm of AI. https://news.techworkerscoalition.org/2021/03/09/issue-5/. See also David Martin, Benjamin V Hanrahan, Jacki O’Neill, and Neha Gupta. 2014. Being a turker. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. 224–235.

[11] This blog post by Tina Ngata has been really helpful to us in terms of the kinds of actions that make good Tangata Tiriti: https://tinangata.com/2020/12/20/whats-required-from-tangata-tiriti/

[12] We would like to particularly acknowledge Chris Cormack, Donna Cormack, Amber Craig, Maui Hudson, Tahu Kukutai, Tina Ngata, Karaitiana Taiuru, Ari Thompson, Kiri West and Daniel Wilson, whose mahi (work) has guided us in writing this essay.

[13] Hudson, M., Thompson, A., West, K. & Wilson, D. 2020. Māori perspectives on Trust and Automated Decision-Making. New Zealand: Te Kotahi Research Institute. Page 11. https://digitalcouncil.govt.nz/advice/reports/towards-trustworthy-and-trusted-automated-decision-making-in-aotearoa/

[14] Ibid.

[15] Participant from expert wānanga on trust and automated decision-making, quoted in Hudson, M., Thompson, A., West, K. & Wilson, D. 2020. Māori perspectives on Trust and Automated Decision-Making. New Zealand: Te Kotahi Research Institute. Page 11. https://digitalcouncil.govt.nz/advice/reports/towards-trustworthy-and-trusted-automated-decision-making-in-aotearoa/

[16] The Information Maintainers, Olson, D., Meyerson, J., Parsons, M., Castro, J., et al. 2019. Information Maintenance as a Practice of Care. Page 11.

[17] Cifor, M., Garcia, P., Cowan, T.L., Rault, J., Sutherland, T., Chan, A., Rode, J., Hoffmann, A.L., Salehi, N. & Nakamura, L. 2019. Feminist Data Manifest-No. https://www.manifestno.com/

[18] https://www.writethedocs.org/

[19] Coffey, D. 2021. “Māori are trying to save their language from Big Tech.” Wired. https://www.wired.co.uk/article/maori-language-tech