Despite mounting evidence of harm and scientific claims that remain untested, biometric systems are still quietly proliferating and embedding themselves in new domains like cars and the metaverse.

Existing policy approaches based on data protection law, like the GDPR, have proven ineffective at preventing some of the most egregious uses of biometric systems. In this environment, comprehensive bright-line prohibitions on collection and use are key to future-proofing policy interventions.


Biometric technologies are infiltrating new markets like automobiles, workplaces, and virtual reality, but they are not always labeled as such. These systems often rely on flawed technology that is poorly suited to its stated purpose, yet the industries making widespread use of biometrics are nevertheless depending on them for sensitive and inappropriate decision-making, such as evaluating a worker’s productivity or a driver’s attentiveness.

Biometrics continue to be quietly embedded in software and hardware across a number of domains where many members of the public encounter them daily, without necessarily knowing they are there or consenting to their use.1See Kelly A. Gates, Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (New York: NYU Press, 2011); Alexandro Pando, “Beyond Security: Biometrics Integration Into Everyday Life,” Forbes, August 4, 2017; and Rob Davies, “‘Conditioning an Entire Society’: The Rise of Biometric Data Technology,” Guardian, October 26, 2021. 

The technologies are being used across a wide variety of industries and in diverse contexts. For example, the in-cabin monitoring systems used to track delivery drivers integrate emotion recognition systems that claim to monitor driver “attentiveness” and keep tabs on potentially aggressive behavior.2See Karen Levy, Data Driven: Truckers, Technology, and the New Workplace Surveillance (Princeton: Princeton University Press, 2022); Lauren Kaori Gurley, “Amazon’s AI Cameras Are Punishing Drivers for Mistakes They Didn’t Make,” Motherboard, September 20, 2021; and Zephyr Teachout, “Cyborgs on the Highways,” The American Prospect, December 8, 2022. Some remote productivity monitoring software uses eye-movement data collected via webcams to ostensibly monitor employee attention.3Darrell M. West, “How Employers Use Technology to Surveil Employees,” Brookings Institution, January 5, 2021. Call center employees’ voices, and those of their customers, are tracked using emotion recognition to monitor for changes in tone and pitch that indicate increased levels of anger and frustration, which are then used to propose canned responses for the call center worker.4See, e.g., “The Stakes of Human Interaction Have Never Been So High,” Cogito, n.d., accessed March 3, 2023. One provider of “mobile neuroinformatics solutions” purports to measure workers’ cognitive state and then provide feedback on their “cognitive performance and needs.”5Cynthia Khoo, “Re: Notice of Request for Information (RFI) on Public and Private Sector Uses of Biometric Technologies — Comments of Center on Privacy & Technology at Georgetown Law,” January 15, 2022. Across these examples, bodily signals are being used as a proxy to detect characteristics such as “attentiveness” and “aggression,” often without clear evidence that they are fit for the purpose.6Luke Stark and Jevan Hutson, “Physiognomic Artificial Intelligence,” Fordham Intellectual Property, Media and Entertainment Law Journal 32, no. 4 (2022): 922–978.

The science does not support the commercialization of many such systems,7See Lisa Feldman Barrett, Ralph Adolphs, Stacy Marsella, Aleix M. Martinez, and Seth D. Pollak, “Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements,” Psychological Science in the Public Interest 20, no. 1 (2019); and Lisa Feldman Barrett, “Inferring Emotions from Physical Signals,” Request for Information (RFI) on Public and Private Sector Uses of Biometric Technologies: Responses, Federal Register Notice 86 FR 56300, January 15, 2022. For a comparative lens on Chinese-developed technologies with similarly flawed scientific validity, see also “Emotional Entanglement: China’s Emotion Recognition Market and Its Implications for Human Rights,” Article 19, January 2021. and ample evidence indicates that they lack reliability and validity for the purposes for which they are currently being used.8Luke Stark and Jesse Hoey, “The Ethics of Emotion in Artificial Intelligence Systems,” FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (March 2021): 782–793. Evidence also suggests that they can lead to discriminatory effects.9See Rosa Wevers, “Unmasking Biometrics’ Biases: Facing Gender, Race, Class and Ability in Biometric Data Collection,” TMG Journal for Media History 21, no. 2 (2018): 89–105; Natasha Lomas, “UK Watchdog Warns against AI for Emotional Analysis, Dubs ‘Immature’ Biometrics a Bias Risk,” TechCrunch, October 26, 2022; Barrett, Adolphs, Marsella, Martinez, and Pollak, “Emotional Expressions Reconsidered”; and Sanjana Varghese, “The Junk Science of Emotion-Recognition Technology,” Outline, October 21, 2019. But instead of a reduction in use following significant attention to these harms, we see proposals being unveiled for even more far-reaching uses of hypothetical systems: for example, a video recently shown at Davos touted the purported use of “brain-wave tracking” to encourage workers to be more focused and productive.10Hamilton Nolan, “A World in Which Your Boss Spies on Your Brainwaves? That Future Is Near,” Guardian, February 9, 2023.

The automotive industry is another key front through which biometrics are being embedded without necessarily being labeled as such. The company Affectiva, which became infamous for releasing an emotion recognition API that claimed to read interior emotional states from facial expressions,11“How It Works,” Affectiva, accessed March 3, 2023. has pivoted, following its acquisition by Smart Eye,12“Smart Eye Acquires Affectiva to Solidify Stronghold on Interior Sensing Market,” Business Wire, May 25, 2021. a provider of driver monitoring systems and eye-tracking technology, to focus on in-cabin monitoring and automotive technology.13“Driver Monitoring System: Intelligent Safety Features Detecting Driver State and Behavior,” Smart Eye, accessed March 3, 2023. Its systems claim to detect fatigue and distraction by collecting data via in-vehicle cameras and sensors, an increasingly crowded space occupied by a handful of companies offering similar technology, such as Seeing Machines,14“Automotive: We Exist to Get Everyone Home Safely,” Seeing Machines, accessed March 3, 2023. Cerence,15Cerence, accessed March 3, 2023. and Eyeris.16Eyeris, accessed March 3, 2023.

Such uses carry all of the flaws outlined in the previous section and raise new concerns besides, such as what it will mean for drivers if their “attentiveness” or “aggression” data is shared with insurers or law enforcement authorities.17Gautham Nagesh, “Eye-Tracking Technology for Cars Promises to Keep Drivers Alert,” Wall Street Journal, September 9, 2016. And unique features of the automotive technology ecosystem raise additional concerns around how Big Tech firms are positioned to benefit from their use. For years, carmakers sought to keep their distance from Big Tech companies, opting instead to contract with smaller companies or develop their own technology in-house to retain greater control over the lucrative streams of data that can be collected on drivers.18Jacob Kastrenakes, “Why Carmakers Want to Keep Apple and Google at Arm’s Length,” Verge, January 13, 2017. This has changed: automakers are entering into multiyear partnerships with Big Tech firms that enable deep integration with car hardware systems.19See Omer Keilaf, “Automakers Partner With Tech Companies To Drive Supply Chain Innovation,” Forbes, July 28, 2020; Alex Koster, Aakash Arora, and Mike Quinn, “Chasing the Software-Defined Dream Car,” Boston Consulting Group (BCG), February 18, 2021; and “Tech Giants Boost Partnerships in Auto Sector,” Automotive News, September 12, 2019. For example, Google’s Android has become so dominant in the auto ecosystem that the industry standards group the Connected Vehicle Systems Alliance announced it is working to create international standards for car software integration with Android.20Leah Nylen, “Big Tech’s Next Monopoly Game: Building the Car of the Future,” Politico, December 26, 2021.

Concerns over this rapid expansion by Big Tech companies into the automotive sector are articulated in several letters sent by members of Congress to FTC Chair Lina Khan and DOJ Assistant Attorney General Jonathan Kanter, asking their respective agencies to intervene given the risk that this data could be abused.21See Elizabeth Warren to Lina M. Khan and Jonathan Kanter, November 1, 2022; and Jamie Raskin to Lina Khan and Jonathan Kanter, April 1, 2022. Moreover, a letter signed by 28 civil society and advocacy organizations urged Congress to act to ensure Big Tech firms are not able to expand their dominance into the automotive market.22Accountable Tech, American Economic Liberties Project, American Family Voices, Athena, Atwood Center, Blue Future, Demand Progress, Fight for the Future, Institute for Local Self-Reliance, International Brotherhood of Teamsters, IronPAC, Jobs with Justice, Libraries without Borders, Main Street Alliance, Media Alliance, Ocean Futures Society, Open Media and Information Companies Initiative, Organic Consumers Association, The Other 98%, Our Revolution, People’s Parity Project, Progress America, Public Citizen, Regeneration International, Revolving Door Project, RootsAction.org, Surveillance Technology Oversight Project, and United We Dream to Amy Klobuchar, David N. Cicilline, Jonathan Kanter, and Lina Khan, January 25, 2022, https://s3.amazonaws.com/demandprogress/images/Big_Tech_Auto_Letter.pdf. As elsewhere, strong curbs on the expansion of biometric technologies in the automotive sector would also help check the growth of concentrated tech power. We see a similar pattern playing out in the augmented/virtual reality market, where companies like Meta are well positioned to build on their data advantage as hardware such as headsets enables the collection of a much wider range of bodily information about consumers.23Veronica Irwin, “Meta Is Looking into Eye-Tracking and Product Placement to Make Money in the Metaverse,” Protocol, January 18, 2022; Tom Wheeler, “If the Metaverse Is Left Unregulated, Companies Will Track Your Gaze and Emotions,” Time, June 20, 2022; and “Privacy and Autonomy in the Metaverse,” Princeton University Library, video, 1:04:12, November 15, 2022.


Policy frameworks directed at biometric surveillance must be future-proofed against these changing forms and use cases of biometric data. This entails defining biometric systems to explicitly include those designed for inference or analysis (even when they don’t uniquely identify the user).

While affect recognition appears particularly ripe for a strict ban, there is a rising drumbeat of consensus in favor of prohibiting the use of biometric systems wholesale, given the unjustifiable risks associated with any collection and storage of biometric data. 

Across these examples of workplace and automotive uses of biometric technology, it’s clear that expansion is continuing but taking on new forms: the integration of “facial recognition” technologies is no longer the headline when biometric systems are deployed; instead, these systems are described as “safety features” or methods for measuring “productivity.”24See Stephanie Condon, “Google Expands Virtual Cards to American Express Customers,” ZDNET, February 7, 2023; and “The Benefits of Biometric Technology for Workplace Safety,” Work Health Solutions, accessed March 3, 2023. They are also being deployed using methods that may not be immediately apparent to consumers, and in contexts in which consent is essentially meaningless.

This confusion has policy implications: it allows these emergent systems to escape regulatory scrutiny. Biometric data is widely accepted as a category of “sensitive personal data,” subject to stricter standards of consent and proof of necessity than other kinds of personal data. However, most existing legal approaches to regulating biometrics adopt a narrow definition of the term that is conditional on whether the bodily information can be, and is, used to confirm or establish a person’s official identity.25AI Now Institute, Regulating Biometrics: Global Approaches and Urgent Questions, September 2020. Some technical literature uses the term “soft biometrics” to describe the process of “categorizing information about bodily traits where a person may not be identified in the process.”26See Unsang Park and Anil K. Jain, “Face Matching and Retrieval Using Soft Biometrics,” IEEE Transactions on Information Forensics and Security 5, no. 3 (September 2010): 406–415; and Antitza Dantcheva, Petros Elia, and Arun Ross, “What Else Does Your Biometric Data Reveal? A Survey on Soft Biometrics,” IEEE Transactions on Information Forensics and Security 11, no. 3 (March 2016): 441–467. On the one hand, many of these newer systems rely on data (such as iris scans or voice data) that could theoretically be used to confirm or establish identity even though their purpose is more oriented toward evaluation (e.g., eye tracking or voice capture in the automobile or in an AR/VR context).27See Khari Johnson, “Meta’s VR Headset Harvests Personal Data Right off Your Face,” Wired, October 13, 2022; and Janus Rose, “Eye-Tracking Tech Is Another Reason the Metaverse Will Suck,” Motherboard, March 10, 2022. On the other hand, a range of data signals may not be able to uniquely identify an individual on their own but can still reveal potentially sensitive inferences about a person, and should be afforded higher levels of protection.28Examples include heart rate monitoring, perspiration, and gait tracking, among others.

Policy approaches must adapt to this market evolution. The White House Office of Science and Technology Policy’s Request for Information on biometric technologies, for example, helpfully defines biometrics beyond identification to include technologies directed exclusively at the “inference of emotion, disposition, character, or intent,” and specifically cites keystroke patterns as an example.29“Notice of Request for Information (RFI) on Public and Private Sector Uses of Biometric Technologies,” Federal Register, October 8, 2021.

This underscores the importance of developing future-proof definitions of biometrics as well as bright-line rules that make clear where certain contexts of use are inappropriate and where certain categories of technology should not be available for commercial development in any instance. One area that is already ripe for such a bright-line prohibition is emotion or affect recognition: public views on affect recognition have largely soured in response to research documenting the many failures of emotion-recognition systems to live up to the claims companies are making about them.30See Kate Crawford, “Artificial Intelligence Is Misreading Human Emotion,” Atlantic, April 27, 2021; Angela Chen and Karen Hao, “Emotion AI Researchers Say Overblown Claims Give Their Work a Bad Name,” MIT Technology Review, February 14, 2020; Jeremy Kahn, “HireVue Drops Facial Monitoring amid A.I. Algorithm Audit,” Fortune, January 19, 2021; and Kyle Wiggers, “New Startup Shows How Emotion-Detecting AI Is Intrinsically Problematic,” VentureBeat, January 17, 2022. Advocacy organizations have called for affect recognition to be explicitly banned by the EU’s upcoming AI Act under the highest risk category of “unjustifiable risks.”31Access Now, European Digital Rights (EDRi), Bits of Freedom, Article 19, and IT-Pol, “Prohibit Emotion Recognition in the Artificial Intelligence Act,” May 2022. The UK Information Commissioner’s Office (ICO) has also issued a warning against the use of these systems, highlighting that the risks of using emotion recognition outweigh the opportunities.32Information Commissioner’s Office (ICO), “‘Immature Biometric Technologies Could Be Discriminating against People’ Says ICO in Warning to Organisations,” October 26, 2022. In a recent statement, the ICO’s deputy commissioner, Stephen Bonner, said: “As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area.”33Ibid. Recognizing these policy headwinds, in June 2022 Microsoft announced it would stop providing “open-ended API access” to emotion-recognition technology, citing “the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.”34See James Vincent, “Microsoft to Retire Controversial Facial Recognition Tool That Claims to Identify Emotion,” Verge, June 21, 2022; and Natasha Crampton, “Microsoft’s Framework for Building AI Systems Responsibly,” Microsoft (blog), June 21, 2022.

Alongside a slew of city- and state-level bans targeting law enforcement use of facial recognition in the US,35Jameson Spivack and Clare Garvie, “A Taxonomy of Legislative Approaches to Face Recognition in the United States,” Regulating Biometrics: Global Approaches and Urgent Questions, AI Now Institute, September 2020. a louder chorus of voices supports a ban on biometrics in particular domains or use cases, such as the collection of biometrics from children,36Lindsey Barrett, “Ban Facial Recognition Technologies for Children—And for Everyone Else,” Boston University Journal of Science and Technology Law 26, no. 2 (2020): 223–285. in educational settings,37Nila Bala, “The Danger of Facial Recognition in Our Children’s Classrooms,” Duke Law & Technology Review 18, no. 1 (2020): 249–267. and for certain types of biometrics deemed “high risk” in the workplace.38Worker Rights: Workplace Technology Accountability Act, A.B. 1651 (California Legislature, 2021–2022 Regular Session), January 13, 2022. In the EU, there is momentum from advocacy organizations around prohibiting “biometric mass surveillance”39The Greens, “Fighting for a Ban on Mass Surveillance in Public Spaces”; and Access Now, “Ban Biometric Surveillance,” June 7, 2022. mechanisms such as the live facial recognition systems used by law enforcement and the mass scraping used to build biometric databases like Clearview AI’s.

But after more than a decade of advocacy around the potential harms of biometric systems, and recent high-profile incidents that underscore the unjustifiable and potentially devastating consequences of bodily data being misused or weaponized against individuals and communities,40See Eileen Guo and Hikmat Noori, “This Is the Real Story of the Afghan Biometric Databases Abandoned to the Taliban,” MIT Technology Review, August 30, 2021; and Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It,” New York Times, January 18, 2020. there is growing momentum behind a more comprehensive ban on the creation and use of such databases.41See Access Now, “Open Letter Calling for a Global Ban on Biometric Recognition Technologies That Enable Mass and Discriminatory Surveillance,” June 7, 2021; and Algorithm Watch, “Open Letter Calling for a Global Ban on Biometric Recognition Technologies That Enable Mass and Discriminatory Surveillance,” 2021. While the dangers of facial recognition have received particular attention42See Luke Stark, “Facial Recognition Is the Plutonium of AI,” XRDS: Crossroads, the ACM Magazine for Students 25, no. 3 (Spring 2019): 50–55; and Evan Selinger and Woodrow Hartzog, “What Happens When Employers Can Read Your Facial Expressions?” New York Times, October 17, 2019. given the ability to capture face data “in the wild” and the ubiquity of face images on the web, these arguments increasingly apply to other biometrics like voice or even gait (how a person walks) and, importantly, to commercial contexts in addition to law enforcement use.