Illustration by Somnath Bhatt

Hype and Harm in the Design of Technological Systems

A guest post by Laura Forlano. Laura, a Fulbright award-winning and National Science Foundation-funded scholar, is a writer, social scientist, and design researcher. She is an Associate Professor of Design at the Institute of Design at Illinois Institute of Technology. Twitter: @laura4lano

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

“Smart” has become common shorthand for talking about the promises of automated, digital, and algorithmic systems. But in German and Old English prior to the 12th century, before it was attributed to human intelligence (and now to digital systems), “smart” referred to ‘causing sharp pain.’¹ One of these definitions applies to the mind, the other to the body, and the two have become increasingly disjointed over the past four hundred years through the advance of Western, European philosophy and science and its complicity in the extractive projects of colonialism and capitalism. As a result, today it is our digital technologies — in particular, AI and ML — that are causing us pain.

The questions of who and what is considered to be smart, for whom, why, and how continue to challenge the ways in which socio-technical imaginaries have been framed around techno-utopian claims of perfect futures.² “Smartness” as a metaphor draws on associations with the mind tied to the history of cybernetics, while ignoring the situated, material, embodied experience of knowledge. To redefine what it means to be “smart,” I argue that we must reconsider these multiple meanings — reintegrating the contemplation of the mind with the visceral experiences of the body — in order to truly understand the implications of the ways in which smartness has been used to market the promises of computation.

By focusing exclusively on the promises of smartness, we place undeserved trust in the claims of technology companies while systematically ignoring the messy realities of what it means to live with technological systems. We fail to ask “What could go wrong?,” “Why should we care?,” and “Who might be harmed?”

In short, one might argue that we have become “too smart.”³ Science and technology studies scholar Jathan Sadowski argues that smart technology: 1) “advances the interests of corporate technocratic power”; 2) is totalizing in its extraction of data over “everything and everybody”; and 3) trades “convenience and connection” for “a wide range of (un)intended and (un)known consequences.”⁴ There are countless examples of this in everyday life, from smart cities and social media to in-home assistants, online shopping, and even wearable technology and medical devices. Smartness has been embedded into technovisions⁵ about the ways in which companies seek to render all of life computable,⁶ thereby harvesting data in cities, offices, homes, and even from our bodies in exchange for applications and services.

With respect to smart cities, media studies scholar Germaine Halegoua writes that “By accumulating and processing digital information about certain urban activities, mobilities, and infrastructures, smart city developers hope to make cities more responsive, efficient, sustainable, and safe.”⁷ Halpern et al. develop the concepts of “the smartness mandate” and the “infrastructural imaginary” based on a 2008 speech that then IBM chair Sam Palmisano gave at the Council on Foreign Relations in New York, in which he noted that his company’s vision of the future rested on “the interweaving of dynamic, emergent computational networks with the goal of producing a more resilient human species — that is, a species able to absorb and survive environmental, economic, and security crises by means of perpetually optimizing and adapting technologies.”⁸ Drawing on Michel Foucault’s notions of governmentality and biopolitics and Gilles Deleuze’s work on “the control society” and immaterial labor, Halpern et al. synthesize their characterization of smartness in the following statements: “1. The territory of smartness is the zone. 2. The (quasi-)agent of smartness is populations. 3. The key operation of smartness is optimization. 4. Smartness produces resilience.”⁹

They argue that smart cities embed the “logic of abstraction” where “civic governance and public taxation will be driven, and perhaps replaced, by automated and ubiquitous data collection.”¹⁰ Smart cities also embed a “logic of software development.” Halpern et al. write:

“Every present state of the smart city is understood as a demo or prototype of a future smart city. Every operation in the smart city is understood in terms of testing and updating…As a consequence, there is never a finished product but rather infinitely replicable yet always preliminary, never-to-be-completed versions of these cities around the globe.”

When will cities be smart enough? Smart cities, with their focus on surveillance and control in tandem with optimization and efficiency, reconfigure urban spaces as techno-utopian labs, platforms, and testbeds, rendering citizens as just another “asset” along with data, infrastructure, and technologies.¹¹ Ultimately, such techno-economic regimes dehumanize citizens through aggressive policing and relentless gentrification, taking away the possibility of living safely, affordably, and differently and, through technologies such as navigation and facial recognition, even the beauty of truly “getting lost” on a side street or in a crowd.

But smartness, in its expansiveness, does not stop at the edges of urban territories. For example, smartness has also been attributed to in-home assistants such as Amazon Alexa in domestic spaces. In a chapter on “Bitches with Glitches” in their book The Smart Wife, Yolande Strengers and Jenny Kennedy write, “Demure and ditzy, they perpetuate stereotypes of familiarity, and perhaps most important, prevent a more transgressive form of cyborg from coming into being. These kinds of smart wives reinforce a cultural narrative that intends to keep women in their place.”¹² They use the metaphor of divorce to suggest that it is time to break up with the status quo notions of gendered labor embedded in current smart systems, and instead argue for engaging with queering, feminism, and Haraway’s “staying with the trouble” as a mode of imagining alternative relations to technology.¹³

From homes to bodies, the logic of smartness seeps under the skin. As a Type 1 diabetic, I live with a “smart” AI system — a sensor and insulin pump that communicate dynamically to adjust my blood sugar. While this AI system is keeping me alive, it’s also ruining my life. The frequent need to calibrate the machine, often in the middle of the night, means that over the past three and a half years I’ve only been able to sleep through the night a few times a week. Writing about my experience in a series of articles, I use the term “disabled cyborg” to capture the notion that both the technology and I are disabled.¹⁴

For disabled people, disability is not a problem, a curse, or a deficit but rather a unique insight into what it means to be human in all of its diversity and complexity. Rather than rushing toward medical cures or technofixes, disabled people (and critical disability studies scholars) embrace and argue for ways of living differently, flourishing, and acknowledging the interdependencies of lives — human and non-human — as well as the structural conditions that make life possible (for some) and impossible (for many).

What might a crip understanding of “smart” mean for the critical AI field — one that more explicitly and generatively engages with the ways in which technologies (like their human creators) are disabled? Such an approach would require that we more seriously consider the question of harm by interrogating the aspects of technologies that we currently refer to as flaws and failures, gaps and glitches, seams and symptoms, errors and omissions, and bugs and biases. These frictions — between the promises of technological systems and the messy realities — are often “known problems,” “unintended consequences,” or “negative externalities” that go unattended because they are deemed too unimportant or unprofitable. Yet the social consequences of such decisions are acutely devastating for those who are harmed, as well as for society more broadly.

Technology companies should stop selling the linear narrative of a future of human progress enabled by smartness, a narrative that shields technologies in a veil of promise and perfection. As a wide range of studies in the social sciences and humanities have shown across many contexts including cities, homes, and bodies, the promises of smart technologies are often aspirational at best and abusive, criminalizing, and dehumanizing at worst. There is nothing smart about that! For every technological system that is deployed, there are many divergent paths along the way, illustrating clearly that the world could be otherwise.

A deep and generative engagement with the ways in which technologies are disabled — intricately wrapped up in human lives — opens up exciting opportunities for the design of these systems. This approach integrates the multiple meanings of the word smart — as both painful and intelligent — in order to engage more critically with the politics of design and technology in the future.

For critical AI researchers, a reintegration of the mind and body around the meaning of “smart” would allow for a deeper engagement with a praxis that transcends the traditional dichotomies between theory and practice.¹⁵ For example, the word “senti-pensar” in Spanish integrates both thinking and feeling to account for embodied knowledge.¹⁶ Such thinking prompts new discussions about what, how, and for whom someplace, something, or someone might be considered “smart,” and in what way. For design practitioners, a new understanding of “smart” could shift the discussion from the discovery of “pain points” (immediate problems that need to be solved in order to market and sell a product) to a deeper consideration of the existential harms of the logics of technological systems, for the purpose of more liberatory and just futures.


Footnotes

[1] See https://www.merriam-webster.com/dictionary/smart#h1. Accessed on July 16, 2021.

[2] Jasanoff, Sheila, & Kim, Sang-Hyun. (2015). Dreamscapes of Modernity: Sociotechnical Imaginaries and the Fabrication of Power. Chicago, IL: University of Chicago Press.

[3] Sadowski, Jathan. (2020). Too Smart. Cambridge, MA: MIT Press. Kindle Edition.

[4] Ibid.

[5] Dourish, Paul, & Bell, Genevieve. (2011). Divining a Digital Future: Mess and Mythology in Ubiquitous Computing. Cambridge, MA: MIT Press.

[6] Finn, Ed. (2017). What Algorithms Want: Imagination in the Age of Computing. Cambridge, MA: MIT Press.

[7] Halegoua, Germaine. (2020). Smart Cities (MIT Press Essential Knowledge series), p. ix. Cambridge, MA: MIT Press. Kindle Edition.

[8] Halpern, Orit, Mitchell, Robert, & Geoghegan, Bernard Dionysius. (2017). The Smartness Mandate: Notes toward a Critique. Grey Room, (68), 106–129. doi: 10.1162/GREY_a_00221

[9] Ibid.

[10] Ibid.

[11] Bollier, David. (2016). The City as Platform: How Digital Networks Are Changing Urban Life and Governance. Washington, D.C.: The Aspen Institute.

[12] Strengers, Yolande, & Kennedy, Jenny. (2020). The Smart Wife, p. 151. Cambridge, MA: MIT Press. Kindle Edition.

[13] Haraway, Donna J. (2016). Staying with the Trouble: Making Kin in the Chthulucene. Durham, NC: Duke University Press.

[14] Forlano, Laura. (2017). Data Rituals in Intimate Infrastructures: Crip Time and the Disabled Cyborg Body as an Epistemic Site of Feminist Science. Catalyst: Feminism, Theory, Technoscience, 3(2), 1–28.

[15] Freire, Paulo. (2018). Pedagogy of the Oppressed. New York, NY: Bloomsbury.

[16] Escobar, Arturo. (2018). Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Durham, NC: Duke University Press.