To meaningfully build worker power, we must create policies regulating algorithmic management that confront why workplace surveillance is particularly harmful: algorithmic systems are used to justify unfair decisions that affect workers’ pay, safety, and access to resources; they intrude on workers’ private lives; and they inhibit workers’ ability to organize.

There is a clear case for bright-line rules that restrain the use of these tools altogether, or at minimum create no-go zones around the most invasive forms of surveillance. Such a policy regime could help even out the power imbalances between workers, employers, and the companies that sell these tools.


Algorithmic management is on the rise. Worker-led organizing over the past several years has called attention to how algorithmic management ratchets up the devaluation of work, leads to the deterioration of working conditions, and creates risks to workers’ health and safety,1Edward Ongweso Jr, “Amazon’s New Algorithm Will Set Workers’ Schedules According to Muscle Use”, Vice, April 15, 2021; WWRC, “The Public Health Crisis Hidden in Amazon Warehouses”, WWRC, January 14, 2021; Strategic Organizing Center, “Safety and Health at Amazon Campaign”, Strategic Organizing Center; Strategic Organizing Center, “The Injury Machine: How Amazon’s Production System Hurts Workers,” Strategic Organizing Center, April 2022; Strategic Organizing Center, “The Worst Mile: Production Pressure and the Injury Crisis in Amazon’s Delivery System”, Strategic Organizing Center, May 2022. unequally distributes risks and privileges, threatens protected worker-led collective action, and leads to the destruction of individual and collective worker privacy.2See Athena, “Put Workers over Profits: End Worker Surveillance”, Medium, October 14, 2020; Sara Machi, “‘We are not robots’: Amazon workers in St. Peters join international picket on Black Friday”, KSDK, November 25, 2022; Athena, Letter to FTC on Corporate Surveillance, Medium, July 29, 2021; Antonio Aloisi and Valerio De Stefano, Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour (Oxford: Hart Publishing, 2022); Karen Levy, Data Driven: Truckers, Technology and the New Workplace Surveillance (Princeton: Princeton University Press, 2023); Pauline Kim, “Data-Driven Discrimination at Work,” William & Mary Law Review 48 (2017): 857–936; and Miranda Bogen and Aaron Rieke, “Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias,” Upturn, December 2018. Policy responses must attend to these calls by confronting the pace and scale of this ramp-up in ways that are attuned to upholding workers’ wages, privacy, autonomy, and right to engage in collective action.3Jeremias Adams-Prassl, Halefom H. Abraha, Aislinn Kelly-Lyth, M. Six Silberman, and Sangh Rakshita, “Regulating Algorithmic Management: A Blueprint”, March 1, 2023.

Surveillance of workers and workplaces has ramped up since the start of the pandemic, spurred by the shift to remote work, an increased blurring of work and home, and the integration of workplace technology into personal devices and spaces.4See Jodi Kantor and Arya Sundaram, “The Rise of the Worker Productivity Score,” New York Times, August 14, 2022; Sissi Cao, “Amazon Unveils AI Worker Monitoring For Social Distancing, Worrying Privacy Advocates,” Observer, June 16, 2020; Zoë Corbyn, “‘Bossware Is Coming for Almost Every Worker’: The Software You Might Not Realize Is Watching You,” Guardian, April 27, 2022; Irina Ivanova, “Workplace Spying Surged in the Pandemic. Now the Government Plans to Crack Down,” CBS News, November 1, 2022; and Danielle Abril and Drew Harwell, “Keystroke Tracking, Screenshots, and Facial Recognition: The Boss May Be Watching Long After the Pandemic Ends,” Washington Post, September 24, 2021. While this is occurring across industries and levels of management,5Such techniques have a long history, though what we see at present is an acceleration of these long-standing trends. See for example Min Kyung Lee, Daniel Kusbit, Evan Metsky, and Laura Dabbish, “Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers,” CHI ’15: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (April 2015): 1603–1612; Wilneida Negrón, “Little Tech Is Coming for Workers: A Framework for Reclaiming and Building Worker Power,” Coworker.org, 2021; Antonio Aloisi and Valerio De Stefano, Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour (Oxford: Hart Publishing, 2022); Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Data & Society, February 2019; Richard A. Bales and Katherine V.W. Stone, “The Invisible Web at Work: Artificial Intelligence and Electronic Surveillance in the Workplace,” Berkeley Journal of Employment & Labor Law 41, no. 1 (2020): 1–62; Ifeoma Ajunwa, Kate Crawford, and Jason Schultz, “Limitless Worker Surveillance,” California Law Review 105, no. 3 (June 2017): 101–142; and Kirstie Ball, “Electronic Monitoring and Surveillance in the Workplace: Literature Review and Policy Recommendations,” Joint Research Centre (European Commission), November 15, 2021. low-wage workers have been at the forefront of the fight to end workplace surveillance and algorithmically enabled harms. Worker-led organizing has brought attention to how algorithmic management is being used for such things as setting workers’ benchmarks and pay,6Tracey Lien, “Uber class-action lawsuit over how drivers were paid gets green light from judge”, Los Angeles Times, February 19, 2018. setting productivity quotas,7Albert Samaha, “Amazon Warehouse Worker Daniel Olayiwola Decided to Make a Podcast About Amazon’s Working Conditions”, Buzzfeed, February 16, 2023, https://www.buzzfeednews.com/article/albertsamaha/daniel-olayiwola-amazon-scamazon-podcast. and making recommendations to hire, promote, demote, and fire workers.8Ibid.

Companies give many disparate reasons for deploying surveillance tech at work, making weakly supported claims that these tools curb discrimination, offer metrics that help mid-tier management demonstrate compliance, and increase the efficiency of labor-intensive processes like reading through applicant resumes. But the deleterious effects of algorithmic management far outweigh these justifications: it ratchets up the devaluation of work, unequally distributes risks and privileges, threatens protected worker-led collective action, and leads to the destruction of individual and collective worker privacy.9See Aloisi and De Stefano, Your Boss Is an Algorithm; Levy, Data Driven; Kim, “Data-Driven Discrimination at Work”; and Bogen and Rieke, “Help Wanted.” What worker-driven efforts underscore is that workplace surveillance has little to do with productivity, efficiency, or safety: fundamentally, these systems are designed for control.

To understand how algorithmic management, worker surveillance, and Big Tech function in concert, we need to interpret them through the same lens: the ability to leverage information as a source of power.10Matthew Bodie, “Beyond Privacy: Changing the Data Power Dynamics in the Workplace”, LPE Project, July 2, 2023. This is as true of the employer-employee relationship as it is of the power imbalance between tech firms and their users. And in both cases, those with access to and control over information accrue benefits at the cost of worker rights, autonomy, and dignity. The right path forward is to institute meaningful curbs on surveillance and algorithmic control in workplace settings, rebalancing the power discrepancy between workers and employers by upholding worker autonomy and the right to engage in collective action.11Adams-Prassl et al., “Regulating Algorithmic Management: A Blueprint”.


Labor Technology Policy: Principles of the Emerging Framework

Worker-specific surveillance concerns are starting to receive more policy attention in the US and EU. The newly unveiled Stop Spying Bosses Act would mandate disclosures and institute prohibitions on the collection of data on workers, and would establish a Privacy and Technology Division at the Department of Labor devoted to the regulation and enforcement of workplace surveillance technologies.12Bob Casey, Cory Booker, and Brian Schatz, “Stop Spying Bosses Act of 2023.” The proposed EU Platform Work Directive would afford baseline protections to improve conditions for platform workers, including mandates for worker access to data, algorithmic transparency, and contestability.13European Commission, “Commission Proposals to Improve the Working Conditions of People Working Through Digital Labour Platforms,” press release, December 9, 2021. A proposed California Workplace Technology Accountability Act would create bright-line rules around certain types of algorithmic management and worker surveillance, and includes mandates for impact assessments and transparency measures.14Worker Rights: Workplace Technology Accountability Act, A.B. 1651, California Legislature (2021–2022). The White House-issued Blueprint for an AI Bill of Rights indicates that the enumerated rights should extend to all employment-related systems, including workplace algorithms and workplace surveillance and management systems.15White House, Office of Science and Technology Policy (OSTP), “Blueprint for an AI Bill of Rights,” October 2022, 3. And the US National Labor Relations Board has expressed concern that employers’ expanded ability to monitor and manage employees may interfere with employees’ ability to engage in protected activity and to keep that activity confidential from their employer.16See Jennifer A. Abruzzo, National Labor Relations Board General Counsel, “Memorandum GC 23-02,” October 31, 2022. The concerns outlined in Abruzzo’s memo are backed by well-documented evidence; see for example Ari Shapiro, “Amazon Reportedly Has Pinkerton Agents Surveil Workers Who Try to Form Unions,” NPR, November 30, 2020. This growing interest is coalescing around an emerging framework for labor technology policy.


Worker-surveillance policy proposals in the US must bolster collective organizing and collective bargaining. Such proposals should underscore the right of workers to engage in protected concerted activity:17National Labor Relations Board, “Concerted Activity,” n.d., accessed March 3, 2023. all workers deserve protection from algorithmic management and workplace surveillance.

While the breadth of proposals is a promising signal, these frameworks need to do more to address power relationships in the labor context, and to tie protections to fundamental rights to autonomy, collective organizing, and collective bargaining. Such frameworks should necessarily remain distinct from, though supportive of, more traditional models of business unions. All workers deserve protection from algorithmic management and workplace surveillance, regardless of whether they are currently members of unions or become union members in the future. Given that these systems impact workers across sectors and at all levels of management, such protections should be broad-based, extending beyond platform-based work to encompass all industry verticals and to include managers. They should include robust whistleblower protections, given both the importance of whistleblowing to surfacing tech-enabled harms and the clear pattern of retaliation against workers who blow the whistle.18Athena, “Silencing of Whistleblowers in the Workplace is a Threat to Public Health”, Medium, May 1, 2020; Lauren Kaori Gurley, “Amazon Warehouse Workers in Minnesota Walk Off the Job, Protest Alleged Retaliation”, Vice, October 2, 2020. They should extend enforcement of existing domains of law to algorithmic management systems, for example using workplace safety laws to curb algorithmic systems that are increasing worker injury rates, and labor laws to address just-in-time shift scheduling algorithms that may violate wage and hour laws.19AFL-CIO Technology Institute, “Woodall AFL-CIO Tech Institute Digital Trade Testimony”, November 30, 2022. Lastly, they should provide for collective, not just individual, rights, given the importance of such rights to worker-led organizing as a core mode of tech accountability.


Worker Organizing and Tech Policy

Tech worker organizing has proven one of the most effective and direct means of curbing tech-enabled harms before they occur. Workers have taken significant risks to engage in collective action, blowing the whistle on harmful technologies and contractual agreements while they were still in development. They have also been markedly successful in convincing their employers to drop contracts with military agencies, calling out human rights violations, and agitating for better workplace conditions and worker protections.

Many of those on the front lines of this work have experienced retaliation in various forms. It is for this reason that achieving strong baseline labor protections, including stronger whistleblower protections, is centrally relevant to all domains of tech policy: an accountable, ethical, responsible, and justice-oriented tech sector will only be built on a base of organized worker power.

Worker-led pushback faces headwinds on many fronts: the industry is retrenching, instituting layoffs while preserving executive compensation. Even before the current economic climate, tech companies took retaliatory measures against their most vocal internal critics, firing many members of the initial wave of tech worker organizing. Already among the world’s most surveillant companies, tech firms closely monitor workers’ activities: Amazon went as far as to hire Pinkerton operatives, the private security firm infamous for union-busting during the 19th and early 20th centuries, to spy on warehouse workers, labor organizers, and environmental activists.

This is why policy measures that bolster worker organizing, such as strengthening whistleblower protections, establishing strong curbs on non-disclosure and non-disparagement agreements, and barring employers from using non-compete clauses, are also key to curbing concentrated power in the tech industry.


Transparency and data-access measures are necessary but insufficient; the burden should be placed on developers and employers rather than on those harmed by these systems.

Workplace surveillance is particularly harmful given its opacity. It extends employers’ power over workers and expands the scope of their gaze into the most intimate corners of workers’ lives. To counter this expanded information asymmetry, many proposals for worker data rights regimes focus on transparency and mandated disclosures around the use of algorithmic systems and surveillance in the workplace as a baseline accountability measure.20UC Berkeley’s Labor Center has a robust compilation of policy information about the case for worker technology rights; see Annette Bernhardt, Reem Suleiman, and Lisa Kresge, “Data and Algorithms at Work,” UC Berkeley Labor Center, November 3, 2021. This includes the ways in which workplace surveillance systems are used, their parameters, their data inputs, and their methods of deployment. The goal of these “worker data rights” proposals is to even out the information asymmetries that exist between workers and employers, and in particular to account for the increased power employers gain through the more invasive collection of data enabled by algorithmic systems. The proposals seek to ensure that workers know what kind of data is being collected about them and how it will be used, and that they have a right to access and correct flawed or incorrect data, as well as a right to contest unfair decisions made about them.

The strongest of these proposals dig deeply into the details, mandating that disclosures take place in multiple phases, including prior to the deployment of a system. They also acknowledge explicitly that consent is not a meaningful framework in the context of work, preempting any presumption that knowledge of a system is tantamount to acceptance of its use, given the risk that refusing to use a work-mandated system could lead to retaliation.21One example is the use of AI-powered hiring software. While developments such as Illinois’ Artificial Intelligence Video Interview Act (2020) and New York City’s Local Law 144 compel employers to inform candidates when they are being processed by an AI-powered tool and offer them a viable alternative, candidates may be unwilling to ask for an alternative lest they be seen as a “difficult” or noncompliant candidate. See Illinois General Assembly, “Artificial Intelligence Video Interview Act,” 820 ILCS 42/, 2021; and New York City Council, “Automated Employment Decision Tools,” 2021/144.

While transparency may help balance out information asymmetries that benefit employers, proposed worker data rights regimes otherwise fall short in many ways. First, such policy frameworks fail to grapple with the relationships between workers, employers, and the vendors who provide the software used for algorithmic management and workplace surveillance—who gains power through the deployment of these systems and who loses it. Telling workers that their employer has used algorithmic targeting to systematically lower their wages is no substitute for enacting rules to ensure wages are set fairly and at amounts workers can live on in the first place. And through contractual obligations and claims to trade secrecy, both employers and software vendors are incentivized to resist mandates that require them to provide full access to workers’ data.22See for example Worker Info Exchange, “Managed by Bots: Data-Driven Exploitation in the Gig Economy,” December 2021. Even when all parties are willing to make access available to workers, locating the data, and the models trained on it, still presents challenging organizational problems, and employers may claim such requests cannot be fulfilled.23See Will Evans, “Amazon’s Dark Secret: It Has Failed to Protect Your Data,” Wired, November 18, 2021; Information Commissioner’s Office (ICO), “Principle (e): Storage limitation,” n.d., accessed March 3, 2023; and Intersoft Consulting, “Art. 5 GDPR: Principles Relating to Processing of Personal Data,” n.d., accessed March 3, 2023.

Second, policy frameworks that emphasize data access are premised on the existence of external actors who have adequate resources and capacity to make sense of the data. While many groups are rising to meet this need,24See for example Worker Info Exchange; Massachusetts Institute of Technology School of Architecture + Planning, “The Shipt Calculator: Crowdsourcing Gig Worker Pay Data to Audit Algorithmic Management”; and Alex Pentland and Thomas Hardjono, “Data Cooperatives,” April 30, 2020. it is often the case that those with the context and expertise needed to parse through sometimes staggering amounts of data—workers themselves and their elected representatives, investigative journalists, researchers, and civil society groups—have comparatively few resources and little time to contribute to such tasks. Moreover, knowledge of harmful corporate practices is not tantamount to asserting power or control over them, but only a first step.25Veena Dubal, “On Algorithmic Wage Discrimination,” SSRN, January 23, 2023; Zephyr Teachout, “Personalized Wages”, SSRN, May 12, 2022. These inherent information and power asymmetries must be taken into account as the precondition for any future policy framework. Ultimately, an approach that uses bright-line rules to clearly limit abusive practices would place the onus on those who benefit from algorithmic management, rather than overburdening those most likely to be harmed with additional work.
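To make concrete the analytical burden that data-access regimes shift onto workers and their allies, consider a minimal sketch of a crowdsourced pay audit, loosely in the spirit of projects like the Shipt Calculator. The file name, field names, and the before/after framing are all hypothetical; a real audit would involve far messier data and far more labor.

```python
import csv
from statistics import median

def load_pay_records(path):
    """Load crowdsourced pay records: one row per completed job."""
    with open(path, newline="") as f:
        return [
            {
                "period": row["period"],           # e.g., "before" or "after" a pay-algorithm change
                "pay": float(row["pay"]),          # payout for the job, in dollars
                "minutes": float(row["minutes"]),  # time the job actually took
            }
            for row in csv.DictReader(f)
        ]

def median_hourly_rate(records, period):
    """Median effective hourly pay across all jobs submitted for a period."""
    rates = [r["pay"] / (r["minutes"] / 60) for r in records if r["period"] == period]
    return median(rates) if rates else None

records = load_pay_records("submitted_pay_records.csv")  # hypothetical worker-submitted data
before = median_hourly_rate(records, "before")
after = median_hourly_rate(records, "after")
if before is not None and after is not None:
    change = 100 * (after - before) / before
    print(f"Median effective hourly pay: ${before:.2f} -> ${after:.2f} ({change:+.1f}%)")
```

Even this toy version presumes that someone recruits submitters, cleans the data, and defends the methodology: exactly the unpaid work that access-centric frameworks quietly assign to those already harmed.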


We must establish clear red lines around domains and types of technology that are inappropriate for use in any instance.

Many policy proposals addressing algorithmic management include red lines that establish domains and types of technology that should never be deployed in a workplace context:

  • Clear red lines should be established around obviously private areas that an employer has no right to monitor, such as office bathrooms and employees’ cars.26Worker Rights: Workplace Technology Accountability Act, A.B. 1651. 
  • There should be boundaries around an employee’s working time, such that monitoring by an employer does not extend into off-duty hours, though in many job contexts this boundary is blurry.27See Worker Rights: Workplace Technology Accountability Act, A.B. 1651; and European Commission, “Proposal for a Directive of the European Parliament and of the Council on Improving Working Conditions in Platform Work,” December 9, 2021. 
  • Certain types of technology, such as emotion recognition, should be prohibited from use in any employment context, both because these systems are pseudoscientific28See Information Commissioner’s Office (ICO), “‘Immature Biometric Technologies Could Be Discriminating against People’ Says ICO in Warning to Organisations,” October 26, 2022; Worker Rights: Workplace Technology Accountability Act, A.B. 1651; and Ifeoma Ajunwa, “Automated Video Interviewing as the New Phrenology,” Berkeley Technology Law Journal 36, no. 3 (October 2022): 1173–1226. and because it is inappropriate for an employer to attempt to ascertain a worker’s inner psychological state.29See European Commission, “Proposal for a Directive of the European Parliament and of the Council on Improving Working Conditions in Platform Work.” 
  • Harmful practices such as algorithmic wage discrimination should be banned outright.30See Dubal, “On Algorithmic Wage Discrimination.” 

Purpose limitations and prohibitions on secondary use, such as prohibitions on the sale or licensing of worker data to third parties,31See Worker Rights: Workplace Technology Accountability Act, A.B. 1651; and Lora Kelley, “What Is ‘Dogfooding’?” New York Times, November 14, 2022. are included in many policy proposals that target algorithmic management practices.32Adams-Prassl et al., “Regulating Algorithmic Management: A Blueprint”. These serve as an important curb on what the scholar Karen Levy describes as surveillance interoperability: the mutual reinforcement of worker surveillance through the combination of government data collection, corporate surveillance, and third-party data harvesting.33Karen Levy, “Labor under Many Eyes: Tracking the Long-Haul Trucker,” Law and Political Economy (LPE) Project, January 31, 2023. Workers are exposed to multiple and reinforcing surveillance regimes, and information collected about them in one context can be ported to another with relative ease. For example, companies like Argyle advertise themselves as brokers of employment data, combining data streams from many sources to provide services such as “risk assessments” about rideshare drivers for insurers.34Argyle. Others, like Appriss Retail, combine data sources such as point-of-sale transaction data with criminal background information to help retail managers evaluate the risk of any given employee committing fraud.35Appriss Retail. Such systems are riddled with flaws and offer little transparency or recourse to workers when they make what are often consequential mistakes.36See Negrón, “Little Tech Is Coming for Workers.”

For this reason, curbing these practices requires both placing clear limits on data collection and ensuring that data (and any models trained on it) cannot have a second life by being ported to another context, as the sketch below illustrates. These measures could be further complemented through proposed changes to competition law that would inhibit tech firms’ ability to combine streams of data across their many holdings [link to privacy and competition].
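To illustrate why purpose limitations and secondary-use prohibitions matter, the toy sketch below shows how easily data streams collected for unrelated purposes can be joined into a single opaque "risk" profile once they share an identifier. It is not a depiction of any actual vendor's product; every record, field, and weight is fabricated.

```python
# Toy illustration of "surveillance interoperability": two data streams,
# collected for unrelated purposes, joined on a shared employee ID.
# Every record, field, and weight here is fabricated.
pos_flags = {  # point-of-sale anomaly counts, from the employer's own systems
    "E1001": {"voided_sales": 12, "refunds": 3},
    "E1002": {"voided_sales": 1, "refunds": 0},
}
background = {  # third-party background-check data, acquired separately
    "E1001": {"prior_record": False},
    "E1002": {"prior_record": True},
}

def risk_score(emp_id):
    """Collapse unrelated data streams into a single 'risk' number."""
    pos = pos_flags.get(emp_id, {})
    bg = background.get(emp_id, {})
    score = pos.get("voided_sales", 0) + 5 * pos.get("refunds", 0)
    if bg.get("prior_record"):
        score += 10  # a record from outside work now shapes an employment score
    return score

for emp_id in sorted(set(pos_flags) | set(background)):
    print(emp_id, risk_score(emp_id))
```

The join itself is trivial; what a prohibition on secondary use targets is the step where data gathered in one context (a background check, a transaction log) is allowed to flow into the score at all.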

In comparison to proposals that center data rights and data access, proposals that put forth bright lines and purpose limitations as enforcement tools will be much more effective at securing accountability for the use of algorithmic systems in a workplace context. However, challenges remain around establishing clear definitions, such as defining the boundary between work and home, in ways that reflect the lived experience of workers—particularly given the increase in remote work and the use of personal devices for work tasks.37See Aiha Nguyen and Eve Zelickson, “At the Digital Doorstep: How Customers Use Doorbell Cameras to Manage Delivery Workers,” Data & Society, October 2022; Gabriel Burdin, Simon D. Halliday, and Fabio Landini, “Why Using Technology to Spy on Home-Working Employees May Be a Bad Idea,” London School of Economics, June 17, 2020; and Personnel Today, “Technology blurs boundaries between work and home for ‘Generation Standby’,” May 20, 2010.


“Human in the loop” policy proposals operate from a flawed perspective on how algorithmic management works in practice and fail to provide meaningful accountability.

Many proposals outline a subset of activities that should be constrained from being fully automated due to the significance of the decision-making involved: these include hiring, promotion, termination, and discipline of workers.38See Worker Rights: Workplace Technology Accountability Act, A.B. 1651; and European Commission, “Proposal for a Directive of the European Parliament and of the Council on Improving Working Conditions in Platform Work.” Some proposals extend this list to any effect of an algorithmic system that has an impact on working conditions, requiring regular review of potential impacts, particularly where they are likely to affect health and safety. These “human in the loop” proposals coalesce around requiring human review of decisions proposed by an algorithmic system, and many also require a right to reversal if a decision is made incorrectly or unfairly.

The framing of these proposals elides the reality that decisions made using AI systems are rarely fully automated or fully human, but generally lie on a continuum between the two. Furthermore, incorporating a human operator can actually legitimize, rather than protect against, flawed or opaque decision-making via automated systems. Adding a human to the loop may “rubber-stamp” automated decisions rather than increase their nuance and precision.39Ben Green and Amba Kak, “The False Comfort of Human Oversight as an Antidote to A.I. Harm,” Slate, June 15, 2021. There is no clear definition of what would constitute “meaningful” oversight, and research indicates that people presented with the advice of automated tools tend to exhibit automation bias, or deference to automated systems without scrutiny.40Peter Fussey and Daragh Murray, “Policing Uses of Live Facial Recognition in the United Kingdom,” AI Now Institute, September 2020. Lastly, such proposals can lead to a blurring of responsibility, in which the person responsible for human oversight is blamed for systemic failures over which they have little control.41Austin Clyde, “Human-in-the-Loop Systems Are No Panacea for AI Accountability,” Tech Policy Press, December 1, 2021; Niamh McIntyre, Rosie Bradbury, and Billy Perrigo, “Behind TikTok’s Boom: A Legion of Traumatised, $10-a-Day Content Moderators,” Bureau of Investigative Journalism, October 20, 2022; Adrienne Williams, Milagros Miceli, and Timnit Gebru, “The Exploited Labor Behind Artificial Intelligence,” Noema, October 13, 2022. These weaknesses need to be accounted for through greater scrutiny of human oversight mechanisms, and by using bright-line measures to address tensions around whether algorithms should be involved in certain decisions at all.


Audits may offer tech companies and employers opportunities to mischaracterize their practices, and they imply that the use of these systems can be contested when this does not reflect reality.

Many proposed policy frameworks that address algorithmic management mandate pre- and post-deployment assessments, as well as Data Protection Impact Assessments (DPIAs).42See Worker Rights: Workplace Technology Accountability Act, A.B. 1651; and European Commission, “Proposal for a Directive of the European Parliament and of the Council on Improving Working Conditions in Platform Work.” We have outlined a set of concerns with auditing as a general practice in the Accountability section, and those considerations all apply here. While audits may have the positive effect of providing a basis for referring cases to governmental agencies that oversee workplace conditions and employment discrimination, such as the Department of Labor, the US Occupational Safety and Health Administration, and the US Equal Employment Opportunity Commission, such referrals could equally be made on the basis of other forms of evidence.

In particular, skewed incentives on the part of employers and software vendors raise real questions about whether auditors could ever gain meaningful access to the information needed to conduct an effective audit.43See Ada Lovelace Institute and DataKind UK, “Examining the Black Box: Tools for Assessing Algorithmic Systems,” April 2020; and Mathias Vermeulen, “The Keys to the Kingdom,” Knight First Amendment Institute at Columbia University, July 27, 2021. Furthermore, the likelihood that a company would assert trade secrecy over the results of an audit is high, raising questions about what information from an audit will be made public and who gets to decide. This has already been made plain by the AI hiring company HireVue, which selectively released results from an independent audit it commissioned in an effort to prove its software is unbiased, mischaracterizing the breadth of the audit and excluding in particular the facial analysis and employee performance predictions it had been criticized for.44Alex C. Engler, “Independent Auditors Are Struggling to Hold AI Companies Accountable,” Fast Company, January 26, 2021. This example illustrates how audits can be wielded by companies as a mechanism to feign an interest in accountability while evading more stringent accountability measures.

Finally, determining who has the requisite expertise to conduct an effective audit, particularly in the absence of any industry standards, remains an open question.

At the most fundamental level, auditing is premised on the idea that the use of these systems is meaningfully contestable, which is a poor reflection of the power dynamics between employers and workers. 


Data protection regulations offer inroads to enforcement today, but don’t go far enough.

Recent regulatory efforts focused on data protection largely exempt workplaces, though amendments to the California Consumer Privacy Act that took effect in January 2023 extended its protections to workers at large firms. Workplace exceptions, primarily grounded in the understanding that the consent frameworks central to most data protection policy break down in the context of work, result in a major carveout under data protection law. In practice, this has meant that workers in the United States are being left out of the wave of interest in privacy regulation, even as workplace surveillance undergoes significant expansion.

Conversely, a handful of regulations in the European context are being used very effectively by unions and worker collectives to sue for information on algorithmic management. For example, the App Drivers and Couriers Union has sued Uber under the EU General Data Protection Regulation (GDPR) to lay claim to rights over the data and algorithms used to determine rideshare drivers’ pay.46 See Dubal, “On Algorithmic Wage Discrimination,” 10; and Natasha Lomas, “Ola Is Facing a Drivers’ Legal Challenge over Data Access Rights and Algorithmic Management,” TechCrunch, September 10, 2020.

Researchers argue that existing EU data protection regulations already provide tools to enable a stronger enforcement posture toward the use of worker surveillance and algorithmic management.47Antonio Aloisi, “Regulating Algorithmic Management at Work in the European Union: Data Protection, Non-Discrimination and Collective Rights,” International Journal of Comparative Labour Law and Industrial Relations, forthcoming, accessed March 3, 2023. Such proposals largely turn to the GDPR, the EU AI Act, and the CCPA to assert that many of the above policy measures are already accounted for in existing law. For example, the GDPR already requires notice to data subjects when they are involved in algorithmic decision-making and profiling, though it remains unclear whether firms using algorithmic management systems are complying.48GDPR Arts. 13(2)(f) and 14(2)(g); see also Intersoft Consulting, “Art. 13 GDPR: Information to Be Provided Where Personal Data Are Collected from the Data Subject,” n.d., accessed March 3, 2023; and Intersoft Consulting, “Art. 14 GDPR: Information to Be Provided Where Personal Data Have Not Been Obtained from the Data Subject,” n.d., accessed March 3, 2023. It is also likely that the GDPR’s Article 35 requirements necessitating the use of a Data Protection Impact Assessment would apply in the workplace context, and that a DPIA should be conducted both prior to deployment of an algorithmic decision-making system at work and iteratively thereafter.49Aloisi, “Regulating Algorithmic Management at Work in the European Union.”

But even here, policy measures that originate from data protection laws largely fall into the trap outlined above, in which the burden is placed on those harmed by these systems rather than on those who benefit from them. Courts have reinforced these weaknesses by concluding that the GDPR is designed for transparency related to violations of the law, rather than to achieve broader objectives such as ensuring workers are generally informed of what information is collected about them.50Dubal, “On Algorithmic Wage Discrimination,” 46. A new generation of “data minimization” policy measures advocated for by civil society organizations and legislative proposals offers greater promise. These include stronger restrictions on the collection of specific types of data (e.g., biometrics) or restrictions on the use of such data in certain contexts (e.g., workplaces, schools, hiring), but it remains unclear to what extent some of these would apply to the work or employment context.


Building Worker Power through Tech Policy

Given that worker surveillance protections would likely develop independently of more general privacy regulations in the United States, there is an opportunity to move away from the traditional data protection paradigm and to formulate worker surveillance laws that attend more effectively to information asymmetries, while retaining the provisions that make the most sense (purpose and collection limitations and rights to access data).

First, these measures could focus on what can be inferred about workers using algorithmic methods—the outcomes—rather than solely on how data is collected and what happens to it. This is particularly important given the shift toward first-party profiling by tech firms; the reliance on anonymization, differential privacy, and the synthetic production of data; and the increased ability to draw inferences using machine learning techniques without directly tracking individual data at all. Such an approach would also cover data that is anonymized but nonetheless used to harm workers as a collective. One example is the collection of workplace data that allows companies to predict which stores and locations are most likely to unionize, and to prepare or preemptively deploy union-busting techniques.51See Sarah Kessler, “Companies Are Using Employee Survey Data to Predict—and Squash—Union Organizing,” OneZero, Medium, July 30, 2020; and Shirin Ghaffary and Jason Del Rey, “The Real Cost of Amazon,” Vox, June 29, 2020.
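A deliberately simplified sketch makes the point: the model below never sees data about any individual worker, only store-level aggregates, yet that is enough to rank locations for preemptive anti-union attention. The features, labels, and numbers are all invented for illustration.

```python
# No individual worker appears in this data: each row is one store's
# anonymized aggregates. A simple classifier can still rank stores by
# predicted unionization risk. All numbers are invented.
from sklearn.linear_model import LogisticRegression

# Per-store features: [avg. survey engagement score, grievance rate, turnover rate]
X = [
    [0.8, 0.02, 0.10],
    [0.4, 0.15, 0.35],
    [0.7, 0.05, 0.12],
    [0.3, 0.20, 0.40],
    [0.6, 0.08, 0.20],
    [0.2, 0.25, 0.45],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = a union drive later occurred at the store

model = LogisticRegression().fit(X, y)

new_store = [[0.35, 0.18, 0.30]]  # aggregates for a store under evaluation
print("Predicted unionization risk:", round(model.predict_proba(new_store)[0][1], 2))
```

An outcome-focused rule would reach this use regardless of whether any individual's data was ever "collected" in the traditional sense.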

Second, any future-looking rights framework should proceed from an impact-oriented definition of data that includes contexts of collective (rather than only individualized) harm. Such an approach would more effectively protect against interference with, or surveillance of, protected concerted activity, bringing algorithmic management policy in line with baseline labor rights protections. Rather than focus on securing data as the object of analysis, policy proposals could adopt an approach that foregrounds how data and algorithmic models are instrumentalized as a mode of control and a means of eroding autonomy and collective power: fundamentally, the issues at hand are not about data, but about the relationships and structural inequities between workers and employers.

Further Reading