AI Now Launches “Regulating Biometrics: Global Approaches and Open Questions”
September 02, 2020
Amid heightened public scrutiny, interest in regulating biometric technologies like face and voice recognition has grown significantly across the globe, driven by community advocacy and research. Advocates continue to remind the developers, profiteers, users, and regulators of these biometric systems that their future course must, and will, be subject to greater democratic control. The next few years are poised to produce wide-ranging legal regulation in many parts of the world that could alter that course. Addressing this moment of possibility, AI Now worked with academics, advocates, and policy experts to publish a compendium of case studies on current attempts to regulate biometric systems and to reflect on the promise, and the limits, of the law.
Edited by Amba Kak, AI Now’s Director of Global Strategy & Programs, the compendium begins with an introduction and a summary chapter that identifies key themes from existing legal approaches and poses open questions for the future. These questions highlight the critical research needed to inform ongoing national policy and advocacy efforts to regulate biometric recognition technologies.
The full compendium is available here, and the eight individual case studies are described and linked below.
Jake Goldenfein (Melbourne Law School) and Monique Mann (Deakin University) track the institutional and political maneuvers that resulted in Australia building a large centralized facial recognition database (“The Capability”) for use by a range of government actors. They examine the failure of regulation to meaningfully challenge the construction of this system, or even to shape its technical or institutional architecture.
Nayantara Ranganathan (lawyer and independent researcher, India) explains how law and policy around India’s Biometric ID (“Aadhaar”) project eventually served to construct biometric data as a resource for value extraction by private companies. She explores how regulation was influenced by the logics and cultures of the project it sought to regulate.
Els Kindt (KU Leuven) provides a detailed account of the European Union’s General Data Protection Regulation (GDPR) approach to regulating biometric data. As many countries are set to implement similarly worded national laws, she identifies potential loopholes and highlights key areas for reform.
Reflecting on the International Committee of the Red Cross’s Biometric Policy: Minimizing Centralized Databases
Ben Hayes (AWO Agency, consultant legal advisor to the International Committee of the Red Cross [ICRC]) and Massimo Marelli (Head of the ICRC Data Protection Office) explain the ICRC’s decision-making process in formulating its first biometrics policy, which aimed to avoid the creation of databases and to minimize risks to vulnerable populations in humanitarian contexts.
Peter Fussey (University of Essex) and Daragh Murray (University of Essex), lead authors of the independent empirical review of the London Metropolitan Police’s trial of Live Facial Recognition (LFR), explain how existing legal norms and regulatory tools failed to prevent the proliferation of a system with demonstrated harms. Through this, they draw broader lessons for the regulation of LFR in the UK and similar technologies elsewhere.
Jameson Spivack and Clare Garvie (Georgetown Center on Privacy and Technology) write about the dozens of laws banning or placing moratoria on police use of facial recognition in the US, providing a detailed taxonomy that goes beyond these broad categories and drawing lessons from their implementation.
Woodrow Hartzog (Northeastern University) explores the promise and pitfalls of the State of Illinois’ Biometric Information Privacy Act (BIPA) and, more broadly, of the right of private citizens to bring their own actions against private companies. He questions the inherent limits of a law centered on “informed consent,” a framework that gives the illusion of control while legitimizing dubious practices that people lack the time or resources to understand and act on.
Stefanie Coyle (NYCLU) and Rashida Richardson (Rutgers University, AI Now Institute, NYU) examine the controversial move by a school district in Lockport, New York to implement a facial and object recognition system to surveil students. They highlight the community-driven response that incited a national debate and led to state-wide legislation regulating the use of biometric technologies in schools.