This post reflects on and excerpts from our most recent report: Regulating Biometrics: Global Approaches and Urgent Questions.
The proliferation of biometric surveillance technology in schools, at protests, in criminal trials, and as a condition of access to welfare has led to widespread calls to introduce new laws, update existing ones, pause these systems, or outright ban their use. The list of regulatory developments this year alone is long and growing:
- In January, the Kenyan High Court ordered the temporary suspension of the National Integrated Identity Management System (NIIMS) biometric ID project due to privacy concerns. Despite this, the Kenyan government recently told the press that it would continue with “mass registration” of people into the database.
- In March, Scotland passed legislation to create a Biometrics Commissioner, who will oversee how policing bodies take, store, use, and dispose of data such as fingerprints, DNA samples, and facial images.
- In July, New York State voted to pause any implementation of facial recognition technology in schools.
- In August, a UK appeals court ruled that police use of facial recognition technology has “fundamental deficiencies” and violates several laws. In the U.S., Senators Jeff Merkley and Bernie Sanders announced the National Biometric Information Privacy Act, which would require companies to get written consent from people in order to collect their biometric data.
- Just this month, Portland, Oregon became the first U.S. city to ban the use of facial recognition technology inside privately owned places accessible to the public (e.g., in stores, restaurants, homeless shelters, senior centers, doctors’ offices, and more).
While the Portland ban is one of the most comprehensive legislative efforts to date, policymakers and advocates are still scrambling to figure out what it means to effectively regulate a rapidly advancing set of technologies.
Meanwhile, the companies that profit from these technologies are acting to mitigate, and potentially undercut or postpone, demands for prohibition or strict regulation. Microsoft and Amazon have released calculated public statements in support of facial recognition regulation. And although IBM, Microsoft, and Amazon have committed to pause police use of these technologies, activists have responded by reminding legislators that such voluntary gestures are not nearly enough: “Facial recognition, like American policing as we know it, must go.”
Yet even if regulation bans the government use of specific technologies, there is a rapidly growing number of private uses, many of which raise similar concerns of exclusion and discrimination. Unless governments model their laws on the recent efforts in Portland, Oregon, the technology is still “going to be used at the Taylor Swift concert in very similar ways to the ways in which cops would use it: to keep people out, to discriminate against people. It doesn’t stop the machine.”
As several governments commit to regulate these technologies, Ada Lovelace Institute Director Carly Kind has pointed out that “The regulatory and policy framework for governing the use of biometric data has been outpaced by advances in the technologies that enable such data to be used, whether by private companies or public bodies. Reform is both necessary and urgent, and needs to be informed by independent, impartial and evidence-led analysis.”
Despite this, laws are often introduced as a band-aid to pacify criticism of inherently problematic systems. For example, when faced with the prospect of being invalidated by the highest courts, biometric systems have repeatedly been defended on the grounds that data-privacy rules provide an adequate safeguard, fueling widespread skepticism about the role these laws play. In the case of India’s nationwide biometric ID project (Aadhaar), legislation that authorized and regulated the project came nearly a decade after biometric data collection began.
As Amba Kak points out in her interview with Karen Hao (MIT Technology Review), we should not think of regulation “just as a tool that will help in limiting these systems. It can be a tool to push back against these systems, but equally it can be a tool to normalize or legitimize these systems…At the moment when we’re really pushing to say ‘Do these technologies need to exist at all?’ the law, and especially weak regulation, can really be weaponized.”
Indeed, some of the largest technology companies that develop and sell these systems to law enforcement have been deeply engaged in legislative processes, often publicly championing the need for some “regulation” while simultaneously lobbying against moratoria and bans. For example, Microsoft celebrated Washington State’s SB 6280 (“Finally, progress on regulating facial recognition,” Brad Smith, the company’s president, announced), only to face questions and criticism about its involvement in pushing through a law that many organizations considered weak, and that effectively undercut a potential ban on government use.
All these issues point to why we published our recent Compendium, Regulating Biometrics: Global Approaches and Urgent Questions. With a growing need for strong and effective regulation, AI Now worked with academics, advocates, and policy experts to reflect on the promise, and the limits, of the law. Covering a diverse mix of countries and types of regulation, the Compendium is intended as a useful resource for ongoing national policy and advocacy efforts to regulate biometric recognition technologies.
Have questions, or want to engage further on the issue of biometric regulation? Reach out to [email protected] with any inquiries or feedback around the Compendium.
As part of the Compendium (see page 18), we’ve outlined the key questions we need to ask and answer in the coming years, pointing to research, regulation, and community engagement that will be needed to inform ongoing national policy and advocacy efforts:
The Data-Protection Lens
- How should regulation define “biometric data”?
- Why have data protection laws had limited effectiveness in curbing the expansion of biometric surveillance infrastructure by government?
- Is meaningful notice and consent possible in the context of biometric systems? What are the limitations of a consent-based approach and what supplements or alternatives might be required?
Beyond Privacy: Accuracy, Discrimination, Human Review, and Due Process
- How should regulatory frameworks address concerns about accuracy and non-discrimination in biometric systems?
- To what extent should regulation rely on standards of performance and accuracy set by technical standards-setting bodies?
- Does requiring “meaningful human review” of biometric recognition systems ensure oversight and accountability?
- Should regulatory frameworks create a risk-based classification between “identification” and “verification” uses of biometric recognition? What are the potential risks of a permissive regulatory approach to verification?
- What kinds of due process safeguards are required for law enforcement use of biometric recognition? Should law enforcement have access to these systems to begin with?
- Are systems that process bodily data for purposes beyond establishing individual identity, such as making inferences about emotional state, personality traits, or demographic characteristics, covered under existing biometric regulation?
- Should such systems be permitted at all, given their contested scientific foundations and mounting evidence of harm?
Emerging Regulatory Tools and Enforcement Mechanisms
- What different types of “bans” and moratoria have been passed in the U.S. over the past few years?
- How can moratoria conditions be strengthened to ensure that eventual legislative or deliberative processes are robust?
- How will bans and moratoria on government use impact the private development and production of biometric systems?
- What regulatory tools can be used to create public transparency around the development, purchase, and use of biometric recognition tools?
- What role can community-led advocacy play in shaping the priorities and impact of regulation?