Broad data minimization principles (“collect no more data than necessary”), a core part of data privacy laws like the EU’s GDPR, have been woefully underenforced and given too much interpretive wiggle room. 

But the next generation of data minimization policies—bright-line rules that prohibit excessive or harmful data collection and use—shows greater promise. Championed by a growing chorus within civil society, these data rules could be a powerful lever in restraining some of the most concerning AI systems (and even the business model that sustains them).

Broad data minimization principles (“collect no more data than necessary”) are a core part of global data privacy laws like the GDPR, but have been woefully underenforced.

Data privacy policy approaches have evolved considerably over the past decade. The abject failure of the “notice and consent” model—under which data collection is broadly permissible provided the user has been notified and has given consent—as the primary way to protect people’s privacy is now a mainstream critique.1 See Federal Trade Commission, “Trade Regulation on Commercial Surveillance and Data Security,” 16 CFR Part 464, 2022; Neil Richards and Woodrow Hartzog, “The Pathologies of Digital Consent,” Washington University Law Review 96, no. 6 (2019); and Claire Park, “How ‘Notice and Consent’ Fails to Protect Our Privacy,” New America (blog), March 23, 2020. The dominant legal privacy regime globally, led by the European GDPR, retains consent as a way to legitimize data processing in certain instances but also imposes baseline standards on firms’ data processing activities that apply irrespective of what the user “chooses.”2 Intersoft Consulting, “Art. 5 GDPR: Principles Relating to Processing of Personal Data,” n.d., accessed March 3, 2023.

Data minimization is the umbrella term increasingly used to refer to some of these core obligations. Aimed at limiting the incentives for unbridled commercial surveillance practices, these include (1) restrictions on what data is collected (collection limitations), (2) restrictions on the purposes for which data can be used following collection (purpose limitations), and (3) limits on the amount of time firms can retain data (storage limitations).3 Ibid. See also “Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the Protection of Natural Persons with Regard to the Processing of Personal Data by the Union Institutions, Bodies, Offices and Agencies and on the Free Movement of Such Data, and Repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC,” Official Journal of the European Union, November 21, 2018. These rules require firms to demonstrate the necessity and proportionality of their data processing: to prove, for example, that it is in fact necessary to collect certain kinds of data for the purposes they seek to achieve; to commit to using such data only for predefined purposes; or to ensure that they retain data only for a period of time that is necessary and proportionate to those purposes. Typically, certain types of data classified as “sensitive” receive a heightened level of protection, for example, a stricter standard of necessity, with fewer exceptions, for the collection of biometric data.4 See Intersoft Consulting, “Art. 9 GDPR: Processing of Special Categories of Personal Data,” n.d., accessed March 15, 2023; and Information Commissioner’s Office (ICO), “What Are the Rules on Special Category Data?” n.d., accessed March 15, 2023. In the US, data minimization rules are part of the California Privacy Rights Act,5 “California Consumer Privacy Act (CCPA),” Rob Bonta, Attorney General, State of California Department of Justice, February 15, 2023, which came into force in January 2023, and are also a core part of a proposed federal privacy law, the American Data Privacy and Protection Act, which has gained widespread momentum.6 American Data Privacy and Protection Act, H.R. 8152, 117th Congress (2021–2022).

These broad data minimization principles offer a clear shift away from consent- or control-based approaches. They move the burden away from individuals, who would otherwise have to make decisions or proactively exercise their data rights, and onto firms, which must demonstrate their compliance with these principles in the interests of users.7 David Medine and Gayatri Murthy, “Companies, Not People, Should Bear the Burden of Protecting Data,” Brookings Institution, December 18, 2019. They also create clear curbs on the kinds of invasive data processing that companies are otherwise incentivized to engage in under the behavioral advertising business model.

Despite their strong potential, in practice these standards (now part of data protection laws like the GDPR and more than a hundred counterparts around the world8 Graham Greenleaf, “Global Data Privacy Laws 2019: 132 National Laws & Many Bills,” Privacy Laws & Business International Report 157 (2019).) haven’t had the kind of structural impact they promise. A key reason for this is the inherent ambiguity in interpreting the legal standards of necessity and proportionality encoded in these principles,9 See Josephine Wolff and Nicole Atallah, “Early GDPR Penalties: Analysis of Implementation and Fines through May 2020,” Journal of Information Policy 11 (December 2021); and Information Commissioner’s Office (ICO), Big Data, Artificial Intelligence, Machine Learning and Data Protection, September 4, 2017. which, combined with overburdened enforcement agencies,10 Access Now, “Access Now Raises the Alarm over Weak Enforcement of the EU GDPR on the Two-Year Anniversary,” press release, May 25, 2020. leaves companies a great deal of leeway in how to apply (or, more likely, evade) these requirements. The enforcement of data minimization raises fundamentally thorny questions: Does maximizing advertising revenue qualify as a reasonable business purpose? If so, does it justify virtually limitless data collection for behavioral advertising? How far can security justifications stretch to legitimize indefinite data retention? These issues remain unresolved despite almost a decade of enforcement. There have been only a few notable cases in which data minimization principles in the GDPR have been enforced to successfully draw bright-line rules against certain kinds of data processing.
In one example, the Swedish Data Protection Authority outlawed the use of facial recognition in schools on the basis of the collection limitation principle, finding that its use for monitoring attendance was a disproportionate means to achieve this goal.11 European Data Protection Board, “Swedish DPA: Police Unlawfully Used Facial Recognition App,” February 12, 2021.

A range of new data minimization proposals move toward specific restrictions on excessive or harmful data practices, such as restricting targeted advertising or banning the collection of biometric data in certain domains.

A new iteration of data minimization rules could overcome these challenges by moving beyond high-level normative standards (as in the GDPR) to specific restrictions on particular types of data and kinds of data use. Bold proposals have been surfaced in the US by civil society and in legislative proposals, including restricting the use of data for targeted advertising,12 See Accountable Tech, “Accountable Tech Petitions FTC to Ban Surveillance Advertising as an ‘Unfair Method of Competition’,” press release, September 28, 2021; Electronic Privacy Information Center (EPIC) and Consumer Reports, How the FTC Can Mandate Data Minimization through a Section 5 Unfairness Rulemaking, January 2022; Accountable Tech, “Ban Surveillance Advertising: Coalition Letter,” 2022, accessed March 15, 2023; and In the Matter of Trade Regulation Rule on Commercial Surveillance and Data Security, R111004, Before the Federal Trade Commission, Washington, D.C., November 21, 2022 (statement of Center for Democracy & Technology). or a narrower version that limits the use of sensitive data for all secondary purposes, including advertising;13 Ada Lovelace Institute, Countermeasures: The Need for New Legislation to Govern Biometric Technologies in the UK, June 2022. and restricting the collection and use of biometric information for particular groups, such as children,14 Lindsey Barrett, “Ban Facial Recognition Technologies for Children—and for Everyone Else,” 26 B.U. J. Sci. & Tech. L. 223 (2020). and in certain contexts, such as workplaces, schools, and hiring.15 See Worker Rights: Workplace Technology Accountability Act, A.B. 1651, California Legislature (2021–2022); Sofia Edvardsen, “How to Interpret Sweden’s First GDPR Fine on Facial Recognition in School,” International Association of Privacy Professionals (IAPP), August 27, 2019; European Data Protection Board, “EDPB & EDPS Call for Ban on Use of AI for Automated Recognition of Human Features in Publicly Accessible Spaces, and Some Other Uses of AI That Can Lead to Unfair Discrimination,” June 21, 2021; and “When Bodies Become Data: Biometric Technologies and Free Expression,” Article 19, April 2021.

These proposals draw bright-line rules for data collection and use. Some, like the ban on using data for behavioral advertising, are justified as both pro-privacy and pro-competition interventions, since they target first-party data collection that is currently concentrated among Big Tech companies.16 Accountable Tech, “FTC Rulemaking Petition to Prohibit Surveillance Advertising,” 2022, accessed March 15, 2023.

These proposals could also effectively shut down some of the most concerning uses of AI—for example, by placing restrictions on the collection of emotion-related data or on biometric data collection in the workplace (which fuels a range of algorithmic surveillance and management tools). In these ways, the next generation of data minimization rules could be a powerful lever in addressing some of the most concerning AI systems and the business model that sustains them.