Illustration by Somnath Bhatt

In search of a cure, we lose out on care

A guest essay by Xiaowei Wang. Xiaowei is the Creative Director at Logic Magazine and recently published a book, Blockchain Chicken Farm. Their work centers community empowerment, technology, and ecology. Twitter: @xrw

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

AI systems are more than the algorithms themselves. Behind these systems are a host of political and economic forces, as well as a complex mosaic of hidden labor. This essay explores the logics of care and of cure that underlie the creation of AI and machine learning systems. In particular, I look at Software as a Medical Device (SaMD), the use of AI within the healthcare industry, to understand entry points for intervention and advocacy on AI systems. The tensions between cure and care in the care industry are a reminder that tackling the inequity proliferated by AI systems is not just about changing Silicon Valley, but about rethinking care and shifting broader, situated circuits of power across different scales and geographies. I argue that the AI and medical software industries’ promises of cure replicate a technological solutionism, resulting in an inability to truly care for people.

Software in medicine has gained greater traction in the US, particularly the use of AI and machine learning as part of diagnosis, propelled by the 2016 21st Century Cures Act and new policy guidance from the US Food and Drug Administration (FDA). The FDA has a reputation amongst policymakers for regulatory focus on the “what” of medicine (e.g., devices or drugs), rather than the “how” (e.g., the practice of medicine itself). A new class of software has emerged that the FDA designates “Software as a Medical Device” (SaMD), a category the agency created under its risk-based approach to market approval. Software used in devices like insulin pumps and pacemakers is considered “Software in a Medical Device” (SiMD). The risks of SiMD lie within the device itself (e.g., the failure of a pacemaker), while the risks of a SaMD app lie in the software’s outputs (e.g., a recommendation to treat, or a risk score) and what they mean for patient health.

SaMD is broadly divided into two categories. The first is digital therapeutics that take health data from patients, using machine learning to give responsive, predictive outputs for patient care such as risk scores or behavioral recommendations — for example, Somryst and reSET-O by Pear Therapeutics, which treat insomnia and opioid addiction respectively; Omada Health’s diabetes and heart disease prevention apps; or Akili’s EndeavorRx, a prescription childhood ADHD treatment delivered via a digital game. The second branch of SaMD includes products that combine medical imaging diagnostics and deep learning — like Viz.ai’s ContaCT, an FDA-approved algorithm for stroke diagnosis via CT scans, and Arterys Cardio, which uses deep learning and medical imaging to improve the diagnosis of heart abnormalities.
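To make the first category concrete, the sketch below shows the general shape of a risk-scoring pipeline: patient health data in, a probability-like score and a recommendation out. It is a minimal illustration in Python, not any vendor’s actual model; every feature, threshold, and data point is hypothetical.

```python
# Minimal sketch of the risk-scoring pattern behind many digital
# therapeutics: patient data in, a probability-like "risk score" out.
# All features, data, and thresholds here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical patient features: [age, BMI, fasting glucose (mg/dL)]
X = rng.normal(loc=[55, 28, 105], scale=[12, 5, 15], size=(500, 3))
# Hypothetical labels: 1 = later diagnosed with diabetes
y = (X[:, 2] + rng.normal(0, 10, 500) > 115).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = [[62, 31, 118]]
risk_score = model.predict_proba(new_patient)[0, 1]
print(f"Predicted diabetes risk: {risk_score:.0%}")

# An app would then recommend a behavioral change or treatment once
# the score crosses a threshold chosen by the vendor, e.g. 0.5.
if risk_score > 0.5:
    print("Recommend enrollment in prevention program")
```

Everything that matters in the essay’s argument sits outside this snippet: where the training data came from, who labeled it, and what happens to the patient on the other side of the threshold.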

Embedded in the FDA’s risk-based approach to regulating SaMD is the assumption of cure — that the cure might be so great that it offsets the risks. The logic of cure has a deep-seated finality — we find solutions and find cures, unlike the ongoing process of care. Cure, as disability studies scholar and activist Eli Clare writes, relies on “eradication,” requires prevention and intervention, and creates violence and discrimination. Within this reliance on eradication is also the reiteration of what is “normal” within medicine — normal bodies, normal symptoms, normal treatments. Bodymind is a term originating with Margaret Price and used by scholars such as Clare and Sami Schalk to convey how drawing hard boundaries between the physical and the mental is impossible, particularly for groups whose bodies and minds have been marginalized by systems like the medical industry, which relies on classification and categorization. Cure functions through this division, transforming bodymind into strictly body and mind. “Cure promises us so much, but it will never give us justice,” writes Clare.¹ The promise of cure in an FDA-approved app functions only at the individual level. The emphasis on individual cure eradicates the realities of a world that creates unequal health outcomes for marginalized bodyminds. Yet to oscillate between cure as “bad” and care as “good” would be a purity politics that reflects the logic of cure rather than care. We need to acknowledge the complicated patchwork of cure and care that we live in, one deeply embedded in modern medicine.

SaMD is built on the rubble of red tape once seen as policy solutions, and on a care industry that relies on global inequality to generate profit. As early as 2005, the WHO was heralding the arrival of “e-health,” which accompanied the push for ICT4D (information and communications technology for development). Such technology would supposedly address gaping global health inequities, all while ignoring the broader social and political-economic context of who lives with illness and who profits from finding cures. Telemedicine had already been adopted in a range of contexts — whether as a joint venture between the Tohono O’odham tribe in southern Arizona, NASA, and Lockheed, or as dengue prevention using cellphones in Peru. Other telehealth initiatives were developed and financed by the US Department of Defense, looking to improve healthcare for soldiers in remote locations — a direct infrastructural answer to America’s imperial project. The incubation of e-health technologies outside the US reflects a longstanding trend. As scholar Adriana Petryna has shown, pharmaceutical companies often cite the stringent and expensive FDA requirements for clinical trials in the US as the reason to look elsewhere for clinical subjects, where willing participants can be recruited in the numbers trials require.

Built on shaky foundations, SaMD can further exacerbate inequity. Take, for example, a diabetes prevention app that monitors patients and updates their risk score. These risk scores are subject to the same problems of bias and ethics that plague numerous other algorithmic attempts to calculate social behavior. Scholars like James Doucet-Battle have shown that diabetes risk scoring continues to racialize disease, to the point where care and cure become obfuscated in Western science’s quest to connect race and biology. For patients, SaMD re-inscribes norms surrounding illness, yet people can have different symptoms for different illnesses — there is no such thing as a “normal” bodymind. In opioid use disorders particularly, digital apps that treat addiction are built upon a legacy of treatments that are a “hidden but active maintenance of white exclusivity.” SaMD’s individualized care, touted as a strength, serves to offload risk onto patients while ignoring social determinants of health. The promise of SaMD relies on accepting wholesale the logic of cure — that if we only have better technical solutions, we’ll be fitter and happier. AI in the care industry reminds us that there are numerous vectors for change and action in how we implement AI systems.
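The way bias enters a risk score can be shown in a few lines. The sketch below uses entirely synthetic data to illustrate one common mechanism, label bias: if one group’s illness is recorded less often because of unequal access to care, a model trained on those records learns to assign that group lower risk for identical symptoms. The groups, numbers, and model here are hypothetical, chosen only to make the mechanism visible.

```python
# Illustrative sketch of label bias in risk scoring, with entirely
# synthetic data. True disease burden is identical across two groups,
# but group B is under-diagnosed because of unequal access to care, so
# a model trained on diagnosis labels learns the inequity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
severity = rng.normal(0, 1, n)     # true (unobserved) disease burden

# Diagnosis depends on severity AND access: group B's illness is
# recorded only 60% as often, so labels encode access, not need.
diagnosed = (severity > 0.5) & (rng.random(n) < np.where(group == 1, 0.6, 1.0))

# Features: a noisy symptom measurement plus group membership.
X = np.column_stack([severity + rng.normal(0, 0.5, n), group])
model = LogisticRegression(max_iter=1000).fit(X, diagnosed)

# Same observed symptoms, different group membership:
for g in (0, 1):
    score = model.predict_proba([[1.0, g]])[0, 1]
    print(f"group {'AB'[g]}: risk score {score:.2f}")
# Group B receives a lower score for identical symptoms, so an app
# gatekeeping care on this score would compound the original inequity.
```

No amount of tuning inside the model fixes this: the bias lives in the labels, which is to say in the world that generated them.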

Marketed as care but actually an instrument of cure, SaMD draws a hard line between the patient (care subject) and the doctor or nurse (care expert). Outside the logic of cure, the logic of care blurs the line between the person cared for and the person providing care — care providers are care subjects themselves, workers in need of care. The training data for SaMD systems is one example of why care relations need to extend across society. SaMD is technically and materially enabled by rendering care workers into experts, setting up a one-way relationship with the patient. That training data can draw from biobank data or, more recently, from the “data exhaust” of Electronic Health Records (EHRs), with doctors and nurses acting as “data janitors.” Care providers input large amounts of patient data into EHRs, data that is then used to build health algorithms, as in Google’s collaboration with HCA Healthcare.
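As a rough illustration of what “data exhaust” means in practice, the sketch below turns a couple of hypothetical EHR entries into training rows. The record structure is invented for illustration; real EHR pipelines (HL7, FHIR, and the like) are far messier, and cleaning them is precisely the “data janitor” labor layered on top of clinicians’ original data entry.

```python
# Sketch of how clinician "data exhaust" becomes training data.
# The record structure is hypothetical and radically simplified.
from dataclasses import dataclass

@dataclass
class EHREntry:
    patient_id: str
    icd10_code: str      # diagnosis code typed in by a clinician
    note: str            # free-text note written during the visit
    lab_glucose: float   # lab value entered or imported by staff

entries = [
    EHREntry("p-001", "E11.9", "pt reports fatigue, counseled on diet", 132.0),
    EHREntry("p-002", "I10", "bp elevated, follow up in 2 wks", 98.0),
]

# Each training row below exists only because a care worker keyed it in.
rows = [
    {"patient": e.patient_id,
     "has_diabetes_code": e.icd10_code.startswith("E11"),
     "glucose": e.lab_glucose,
     "note_length": len(e.note)}
    for e in entries
]
print(rows)
```

Every field in those rows is downstream of a care worker’s unpaid-for, uncredited data entry, which is the point: the “expert system” is built from the labor of the very workers it renders invisible.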

As Dr. Robert Wachter has emphasized, the stress, care fatigue, and occupational hazards doctors and nurses are put under are further exacerbated by badly designed, faulty medical software systems, particularly EHRs. Care providers need to be cared for too, and their care work is often stifled by a hulking medical-industrial complex. Such pressures manifest in numerous ways — as in 2002, when staff at Cedars-Sinai Medical Center in LA almost went on strike in response to the rollout of a new EHR system. Other doctors, such as Abraham Verghese, have been adamant about how crucial the physician and physician judgment are to care, and how automating patient care is not only impossible but dangerous. While we hope for empathy and patient-first approaches from care providers, there are also numerous accounts and studies of how the subjectivity of diagnosis leads doctors to dismiss patients based on race and gender, with life-threatening consequences. SaMD promises rational, optimized solutions to medicine while denying the reality of care — that algorithms are built upon the same subjectively created training data, and that a diagnosis does not ensure a regime of care.

The data exhaust of EHRs is made possible by systems that prioritize profit over care. Under a long-standing paradigm in the US healthcare system, fee-for-service billing, hospitals are compensated for the quantity of treatment, not its quality. EHR systems were created not just in hopes of turning paper records into digital ones; at their core, propelled by the US HITECH Act, was a desire for more efficient billing and fraud prevention, particularly in Medicare, with the Act encouraging EHR adoption through billions of dollars in incentive payments.

Cure and care exist in a mosaic, meaning that not all is lost. To move away from the logic of cure and towards a logic of actually caring for patient needs, we need to recognize the economic impetus behind machine learning and AI in medicine, and how the political economy of medical AI systems often relies on health inequities to exist. Optimized, “cheap” medical diagnosis is proffered as the solution to current gaps in health. But medical AI exists in a larger system, meaning that policies seemingly unconnected to AI ethics can have enormous consequences for AI systems; changing such policies can have ripple effects on the medical software industry, increasing care rather than cure. Policy that shifts hospitals from a “fee-for-service” model to a “value-based” model, in which hospitals and providers are compensated for the quality of care rather than the number of services, is just one instance — a shift that might mean expanding midwifery services and home visits rather than recommending patients download an app.

AI in medicine is often touted as a neutral solution or “tech for good,” and when such systems are critiqued, companies promise safeguards such as human interpretability or bias avoidance. These promises are inadequate, because the builders of AI systems forget that their products exist in larger socioeconomic systems. Such corporate endeavors in medical AI must also be recognized for what they often are — empty vessels for large sums of capital to flow through, benefiting corporations rather than patients and care workers. Moving towards care with AI in medicine means approaching problem spaces holistically and deploying AI carefully: for example, rather than using AI as a technical, curative solution for diagnosis, how could AI be used to improve medical billing, supporting patients in navigating arcane insurance practices? The latter is far less splashy, but much more in the realm of care rather than cure.

Transforming AI systems is not just confined to the realm of engineering ethics conferences. Working from the mosaic of cure and care, we can recognize the ways individuals and industries are situated in broader systems of capitalism, and that there are numerous entry points into transforming the ways AI is shaped, made, and deployed.

References

[1] Clare, Eli. Brilliant Imperfection: Grappling with Cure. Durham: Duke University Press, 2017.