In early 2024, New America brought together experts in international relations, computer science, and technology policy to share their thinking on how governments and institutions should navigate AI to harness its strengths and mitigate its risks. Below is AI Now’s contribution; for the full set of interventions, see here.

Concentrated Industry Power Is Shaping AI

By Sarah Myers West, managing director of the AI Now Institute and formerly a senior advisor on AI at the U.S. Federal Trade Commission

AI as we know it today is a creation of concentrated industry power. A small handful of firms not only controls the resources needed to build AI at scale—cloud infrastructure, data, and labor—but has also set the trajectory of AI development by influencing AI research for over a decade, increasingly defining the career incentives in AI research, the metrics of prestige at leading conferences, and what counts as the leading edge of AI innovation. Today’s AI boom is driven at its core by the legacy of the surveillance business model, and its incentive structures are shaped by the existing infrastructural dominance of the small handful of firms that pioneered it. This is what drives the push to build AI at ever-larger scale, increasing demand for resources that only Big Tech firms can provide and further cementing these companies’ considerable advantage.

Understanding these dynamics is particularly important for conversations about global governance: The economic power amassed by these firms exceeds that of many nations, and they have demonstrated a willingness to flex that muscle when needed to ensure that policy interventions do not perturb their business objectives. This raises challenging questions for regulators: Can any single nation generate sufficient regulatory friction to curb unaccountable behavior by large tech firms? If so, how? What is the appropriate role for global governance bodies to play? In the absence of a globally coordinated effort to regulate AI, companies have largely been able to set a self-regulatory tone, leveraging fragmentation to create their own forums for standard setting, which then become the de facto centers of industry governance.

There is nothing inevitable about the trajectory of this technology: It remains open to change. AI has meant different things over the course of an almost 70-year history, from expert systems to robotics to neural networks and now large-scale AI. Effective national regulatory enforcement, combined with coordinated global governance processes, could play a particularly important role in redirecting AI away from the current status quo and toward more beneficial public goals and interests.