FTC Commissioner Rebecca Slaughter likes to say that “doing something is a policy choice, doing nothing is also a policy choice.”1 “Nothing about pervasive data collection and tracking, the shape of social media, or the dominance of a few tech firms, was inevitable,” Slaughter said at the 2024 FTC Tech Summit. “Inaction in the face of those developments was a policy choice. We have the knowledge and experience now to see this era play out differently.” See Federal Trade Commission, FTC Tech Summit, Vimeo, January 25, 2024, https://vimeo.com/907483555. The current AI arms race perfectly encapsulates this predicament for antitrust regulators. Enforcers have expressed frustration and collective guilt that what they have done to contain the growth of Big Tech has been too little, too late. Some fear that what we see unfolding before our eyes in AI is a disaster foretold, with Big Tech grandfathering its grotesque market power into any new paradigm that might emerge around AI.

Whether AI will beget an unprecedented technological revolution, as some foresee, or will turn out to be a hype cycle, as many believe, Big Tech is making sure it uses every one of its massive scale advantages in the relevant assets (chips, compute, data, money) to own that future. There can be little doubt that if AI turns out to be successful and a “game changer,” we will find ourselves in the hands of the very companies we are currently pursuing through desperate post hoc antitrust and regulation—far too late—for having built extractive and exploitative ecosystems through serial acquisitions and “agreements,” as well as an accumulation of self-preferencing, tying, bundling, integration, exclusivity, and more. It’s plain to see: the scale of the investment and effort, and the speed at which data centers and cloud assets are being rolled out, dwarf anything anyone else can hope to achieve.

This is not a Luddite posture. Serious research indicates that scale is a misplaced obsession that is not technically justified, and ultimately serves only to preserve the primacy of the same hyperscalers that dominate the digital space today.2 Gaël Varoquaux, Alexandra Sasha Luccioni, and Meredith Whittaker, “Hype, Sustainability, and the Price of the Bigger-Is-Better Paradigm in AI,” arXiv:2409.14160v1 [cs.CY], September 21, 2024, https://arxiv.org/pdf/2409.14160. We are seeing market power being inexorably protected and projected into the future.

And yet, as we watch massive scale advantage being put in place and cemented, regulators in Europe are not taking a stand. The market-power playbook is unfolding before our eyes, but all we hear from regulators is “we are monitoring” and “we are carefully studying” the issue. Worse, questions are being asked and “agreements” ostensibly “investigated,” but so far the response has been, “There’s nothing to see here—case closed.”

Regulators’ heightened focus on Big Tech acquisitions over the past five years, together with the rise of “killer acquisition” concerns, sensitized Big Tech to the perils and delays of merger review.3 Microsoft/Activision took two years to get through in Europe, with enormous lobbying effort; Facebook/Giphy was blocked; Amazon/iRobot was abandoned; Adobe/Figma, too; Booking/eTraveli was blocked. Acquiring assets in the usual way implies too much perceived regulatory risk. Yet obtaining external assets (“buy versus build”) remains the norm in tech: ecosystems have been built largely by buying (without scrutiny) a number of willing complements, with founders all too happy to go to the beach on a large send-off. This is conventionally portrayed as benign, providing founders with a “natural exit route” that not only puts hundreds of millions in their pockets but is an essential “incentive to innovation.” The motives can be much darker. But whether the objective is to buy up talent and assets that would take too long to develop organically, or to snuff out something perceived as a potential competitor along the way, acquisitions are central to how tech operates.4 Gregory Crawford, Tommaso Valletti, and Cristina Caffarra, “‘How Tech Rolls’: Potential Competition and ‘Reverse’ Killer Acquisitions,” CEPR, May 11, 2020, https://cepr.org/voxeu/blogs-and-reviews/how-tech-rolls-potential-competition-and-reverse-killer-acquisitions.

The heightened regulatory risk means we are seeing multiple examples of “clever lawyering” that exploit opportunities to present a tie-up in ways that cannot formally be caught by merger rules. Clever lawyering can include exploiting time bars, for instance;5 The Microsoft/OpenAI defense in a nutshell: “You already found we had material influence when we made an initial investment in this company x years ago; now the new (much larger) investment changes nothing in terms of control, so you cannot touch us.” or appearing to make a purely financial investment; or taking no shares in the company but only title to a share of profit distribution; or hiring the team rather than acquiring the company, essentially spoliating the asset and leaving a shell behind. Because the law sets out specific requirements for any transaction to create “a relevant merger situation,”6 In the UK, for instance, it must be the case that “the assets cease to be distinct,” and the parties must be above a particular share of a well-defined “market,” or share of “supply.” companies have enormous leeway to build constructs that are technically within the confines of the law—being careful not to appear to create control structures—and yet are anything but competitively neutral. In exchange for stacks of money and cheap “compute,” for example, the company gets advance notice of new features and products. This is an advantage. But if the conditions are not met—if assets do not cease to be distinct, or if one party falls short of the required turnover thresholds—then regulators will throw up their hands and say, “There’s nothing to see here.”

This happened recently with Amazon/Anthropic and Microsoft/Mistral in the UK. The Amazon/Anthropic decision7 Amazon funded Anthropic to the tune of $4 billion overall between 2023 and 2024, plus computing capacity. See Competition and Markets Authority, “Amazon.com Inc.’s Partnership with Anthropic PBC: Decision on Relevant Merger Situation,” September 27, 2024, https://assets.publishing.service.gov.uk/media/66f680eec71e42688b65eda0/Summary_of_phase_1_decision.pdf. issued by the Competition and Markets Authority (CMA) in September 2024 concluded that it “did not need to reach a conclusion” on whether the arrangement conferred on Amazon “material influence” over Anthropic, simply because the basic threshold for merger control intervention in the UK was not met.8 “In particular, the CMA found that Anthropic’s UK turnover does not exceed £70 million in the UK, nor do the Parties, on the basis of the available evidence, together account for a 25% or more share of supply of any description of goods or services in the UK.” Competition and Markets Authority, “Amazon.com Inc.’s Partnership with Anthropic PBC: Found Not to Qualify Decision,” September 27, 2024, https://www.gov.uk/cma-cases/amazon-slash-anthropic-partnership-merger-inquiry#found-not-to-qualify-decision. In the Microsoft/Mistral decision, the CMA said it “did not believe the parties ceased to be distinct.”9 Competition and Markets Authority, “Microsoft Corporation’s Partnership with Mistral AI: Decision on Relevant Merger Situation,” May 17, 2024, https://assets.publishing.service.gov.uk/media/664c6cfd993111924d9d389f/Full_text_decision.pdf. Amen. The agency simply threw in the towel.

The Persistence of Old Norms

What’s worse is that even if by some miracle the agency decides there is something to investigate, cases all fall at the next hurdle: What is the “merger theory of harm” that can identify an issue? In conventional antitrust analysis, mergers and agreements can be problematic if they create a significant share in a well-defined relevant market, such that it can be inferred that market power will be exercised thereafter. This is frankly hopeless. These are merger rules created for an analog world, where forward-looking issues about creating the conditions for exploiting market power later are barely catered to. This has of course been the kiss of death for inquiries into past digital cases (from Facebook/Instagram to Facebook/WhatsApp), where targets were incipient or monetization did not take place in the conventional way, through a well-defined “price.”

The Microsoft/Inflection10 Microsoft paid $650 million to Inflection to hire key personnel, including two cofounders, in March 2024. Inflection had raised $1.3 billion just a few months earlier, including from Microsoft, so this was a significant climbdown. example is telling. Having initially decided it would invite Member States to refer the case to it, DG Competition ended the probe in September 2024 in the wake of its court defeat in the challenge to Illumina/GRAIL. Most significantly, the CMA’s decision in the same case makes especially sad reading. Having eventually decided that hiring key personnel and staff from a company while paying off the funders is potentially akin to an acquisition of the asset (what else could it be?), the CMA closed the case on the grounds that in a “relevant market” for the “development and supply of consumer chatbots globally” and the “development and supply of foundation models globally,” there would not be a material “loss of competition.”11 Competition and Markets Authority, “Microsoft Corporation’s Hiring of Certain Former Employees of Inflection and Its Entry into Associated Arrangements with Inflection,” September 4, 2024, https://assets.publishing.service.gov.uk/media/66d82eaf7a73423428aa2efe/Summary_of_phase_1_decision.pdf.

The problem with all of this is that it predictably goes nowhere. Of course a static estimate of “market shares,” as a snapshot today, will not generate high enough numbers. Of course looking at revenues, or any other conceivable measure of output today, is not going to provide any measure of market power. What matters is the control of key inputs, and conventional antitrust analysis that focuses on present outturns leads precisely nowhere. Regulators need a bold and imaginative posture.

What Theory of Harm?

Antitrust agencies should call it like it is: Big Tech players controlling a set of very large assets (chips, compute, data) are doing deals to combine these key inputs with machine learning in order to move ahead fast and preemptively “occupy” the terrain. This is not entirely new or specific to AI: the extension of market power by leveraging complementary relevant assets has in fact been Big Tech’s playbook for years—swinging capabilities into new spaces to gain first-mover advantage, preempt competition, and suffocate challenges. Antitrust economists have traditionally argued that we need an economic model to “prove” a narrow, specific “mechanism” through which market power gets “leveraged” from one place to another; or that “this cannot be bad; in fact it is beneficial: it is efficient to be able to combine complementary assets and develop new services.”

The key is that these ecosystems can marshal their giant assets (including ill-gotten ones, like our data) in unprecedented ways to extend their existing massive power into new applications, preempting others. This very pooling and aggressive deployment of key inputs to occupy new spaces ahead of others should itself be the antitrust theory of harm. Is this exceptionalism? Perhaps, but Big Tech does deserve differential treatment given its past form. And this is absolutely the way to understand what is going on. In “Antitrust Policy and Artificial Intelligence,” Cecilia Rikap similarly refers to “an ensemble of mechanisms [enabling] cloud hegemons (Microsoft, Google, Amazon) to plan the whole AI knowledge and innovation network by weaponizing interdependence in networks.”12 Cecilia Rikap, “Antitrust Policy and Artificial Intelligence: Some Neglected Issues,” Institute for New Economic Thinking, June 10, 2024, https://www.ineteconomics.org/perspectives/blog/antitrust-policy-and-artificial-intelligence-some-neglected-issues. See also Henry Farrell and Abraham L. Newman, “Weaponized Interdependence: How Global Economic Networks Shape State Coercion,” International Security 44, no. 1 (July 2019): 42–79. The idea of “weaponizing” assets is particularly apt, as “cloud/AI hegemons are focused on ensuring that emerging companies build their architecture and run fully on their clouds,” which “provides a vehicle to affect their architecture decisions and sterilizes their role as real challengers.”

Regulators faced with these workarounds should not throw up their hands but should instead go boldly forward, arguing “weaponization of scaled complements,” which is the essence of the concern: a small group of hyperscalers and Big Tech firms aggressively using their large-scale assets in ways no one else can, to project their existing power into the future. This has both exclusionary and exploitative connotations in that it forecloses opportunities for alternative states of the world, secures extraction of future rents by today’s giants, and determines the direction of innovation. A coherent case can be articulated along these lines. It won’t please antitrust traditionalists, but it does precisely capture the reality on the ground. We need ongoing policy R&D with theories of harm shaped to map into the real world—not a persistent, self-defeating, cookie-cutter application of obsolete rules.