In this conversation, our guests Amy Kapczynski and Jeremias Adams-Prassl discuss the growing interest in “industrial policy” approaches for the tech sector across the political spectrum. Industrial policy, to take Amy’s definition, refers to “sector-specific policy-making aimed at shaping the economy to meet public goals.”

Our guests offer a critical lens on current usage of the frame in the US and the EU to serve particular industry and ideological interests (e.g., in the so-called US–China AI arms race, or as the backdrop to flagship regulatory measures in the EU such as the AI Act). They also open up the possibility of industrial policy approaches that, instead, center public values. This alternative paradigm for industrial policy might include the intentional reorientation (or even phasing out) of industries that cause public harm, or democratizing industrial policy by giving communities, workers, marginalized groups, and those affected by these industries power in its design.


Amy Kapczynski is a Professor of Law at Yale Law School, Faculty Co-Director of the Law and Political Economy (LPE) Project, co-founder of the LPE Blog, and Faculty Co-Director of the Global Health Justice Partnership. Her research focuses on law and political economy, and theorizes the failures of legal logic and structure that condition contemporary inequality, precarity, and hollowed-out democracy.

Jeremias Adams-Prassl’s research focuses on technology, innovation policy, and the future of work in the European Union and beyond. He is a Fellow of Magdalen College. His book Humans as a Service (2018) explores the promise and perils of work in the gig economy across the world. Since April 2021, Jeremias has led a five-year, interdisciplinary project exploring the rise of Algorithms at Work, funded by the European Research Council and a 2020 Leverhulme Prize.


  • I’d be curious for you to talk a little more about that conceptualization of industrial policy for the tech sector. In what ways might we want to draw out that notion of winding down, rather than focusing only on buildup?
    • Amy Kapczynski: Although people say we didn’t have an industrial policy in the neoliberal era, I think that’s clearly wrong. I think we had industrial policy that was designed to give power to the private sector, and to create private alternatives to public provisioning, to increase private power over the outcomes of sectoral investments.
    • Amy Kapczynski: I think […] industrial policy is and should be about amplifying the sectors we want and winding down the sectors we are concerned about. But alongside that, we also have to think about public options, public ownership, and public equity as means of reversing some of the hollowing out of the state, and the devolution to the private sector of the power to do certain things.
  • The frame that you’re describing here seems like a venue through which broader community interests and the public interest could be more adequately represented. What do you see as the path to get closer to that conception of industrial policy, particularly given that the neoliberal current has been so dominant?
    • Amy Kapczynski: To undo the economy that we’ve built and pivot away from neoliberal industrial policy to post-neoliberal industrial policy, we would have to think about doing two things. One is building public power over the economy, and the other is building countervailing power. One reason we need to build public power over the economy is to be able to assert these priorities: winding down this sector, amplifying that one. But we also need to do things like better inform the government about sectoral operations and the empirics of what’s going on in this sector or that.
  • We’ve recently seen an upswing in discussion around industrial policy in the US, the EU, and elsewhere. I’m curious to hear both of your reads on what’s animating this turn to industrial policy.
    • Jeremias Adams-Prassl: This recent flurry of regulation (think of the AI Act, the DMA, the DSA, the Platform Work Directive, etc.) reflects a quick realization that, even though the GDPR is really quite a young instrument, huge swaths of it are already outdated. Or, if not outdated, at least not capable of meeting some of the challenges that we’re now seeing with the rise of AI and associated technologies.
    • Jeremias Adams-Prassl: What can we do going forward to keep striking the balance between protecting citizens and ensuring fundamental rights protection, while also keeping Europe an innovator and avoiding the stereotype you sometimes get in tech circles that the US makes the technology and the EU regulates it?
  • Jeremias, you’ve been doing a lot of thinking about these regulatory approaches. I’m curious how they reflect different conceptualizations of industrial policy, and what some of the distinctions between them are.
    • Jeremias Adams-Prassl: So, in the European Union, what we’re seeing at the moment is a really interesting mix of updating existing instruments and coming up with very new norms. The DSA and the DMA, on the one hand, are really a story of continuity: the DSA in terms of updating the e-Commerce Directive and protecting people against illegal goods and services, and the DMA in terms of following up on the Union’s track record in competition law, or antitrust. The more novel approach is something like the AI Act, which really tries to regulate artificial intelligence horizontally, across all the different areas in which it is deployed. You have essentially a risk-tiering: the idea that certain applications can never be deployed at all, and then, as the main element of the AI Act, various areas, from consumer protection to finance, health, and workers’ rights, where certain rules need to be complied with. That is, in one sense, quite a novel approach, because it’s trying to do this omnibus regulation of AI. On the other hand, it’s also quite problematic, because even though it looks like a completely new way of doing things, in fact it harks back to preexisting models of tech regulation. You all know the CE mark that you see on lots of batteries and phones and widgets? Well, that goes back to something called the New Legislative Framework, which is the way the EU has regulated product safety for 20, 30, 40 years. But the fundamental issue is that AI is not a product. And so, one of the real challenges we’re seeing with the AI Act at the moment is that it commits what you might want to think of as a category error: trying to use a regulatory model that has been very successful for physical widgets, where we don’t have big democratic issues of legitimacy when it comes down to determining the details of the rules, and then applying this to AI systems where, of course, the exact opposite is true.

Further Reading: