Illustration by Somnath Bhatt

Concentrated power and economic embeddings in ML & AI

A guest essay by Bernhard Rieder, Giovanni Sileno, and Geoff Gordon.

Bernhard is Associate Professor of New Media and Digital Culture at the University of Amsterdam. His research focuses on the history, theory, and politics of software.

Giovanni is a Senior Researcher at the Informatics Institute of the University of Amsterdam, investigating computational regulatory systems and artificial cognition.

Geoff is a Senior Researcher in international law at the Asser Institute, researching governance issues at the interface of technology, security and economy, lately focusing on quantum technologies.

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.


Regulators in the EU and US have recently drawn attention to the market power and monopolistic behavior of big tech firms. Lawmakers in the EU argue that ‘traditional businesses are increasingly dependent on a limited number of large online platforms’ and that these ‘gatekeepers’ leverage their privileged position to stifle competition and enter new markets at a rapid pace (EPRS 2020). Striking a similar tone, lawmakers in the US claim that these companies have ‘abused their dominant positions, setting and often dictating prices and rules for commerce, search, advertising, social networking and publishing’ (Kang and McCabe 2020). Yet these arguments focus mostly on the conduct and market position of online platforms, paying little attention to the underlying technologies through which they operate. But what if techniques like machine learning (ML) are themselves factors in the continuous expansion of already oversized tech giants? If so, contemporary AI will further cement the dominance of a small number of companies and countries, exacerbating global inequalities in economic power and control over central technological resources and affordances. In this essay, we draw attention to the relationship between market power and these technological factors. We argue that the critical debate about the potential social harms of AI needs to be mindful of ‘the seductive diversion of “solving” bias in artificial intelligence’ (Powles & Nissenbaum 2018) and include broader societal stakes such as market concentration and monopolization as matters of concern.

The argument for increased scrutiny of the relationship between technological characteristics and economic outcomes relies on the recognition that technologies have consequences that exceed questions of how and to what ends they are used. They may ripple through societies in unanticipated ways, producing lasting structuring effects beyond their immediate sphere of application. One such effect is how technologies shape the economic organization of societies. The burning of fossil fuels as a source of energy, for example, has not only changed how people travel and goods are produced, but has had profound consequences for power structures within societies as well as relationships between countries (Mitchell 2011) — not to mention the far-reaching repercussions of environmental destruction. Some of these effects are due to social and political forces, but others are due to the material properties involved, for example how fossil fuels are extracted, processed, and delivered. In the case of oil, the ability to dominate a limited number of material choke points in the production and distribution process led to the emergence of vertically integrated monopolies that required deliberate state intervention to break up. Traditionally, monopolies are seen as problematic because they may lead to rising prices for consumers, diminished product quality, or negative repercussions on labor conditions. While these concerns may not all hold true for tech companies offering many ‘free’ products, critics have argued that the new ‘data-opolies can actually be more dangerous than traditional monopolies,’ because they ‘affect not only our wallets but our privacy, autonomy, democracy, and well-being’ (Stucke 2018).

More recently, researchers and activists have started to voice concerns that the technologies behind AI may further exacerbate the trend toward monopolization in the tech sector and the problems that come with it. Large tech firms like Amazon, Facebook, Google, and Microsoft in the US and Alibaba, Baidu, and Tencent in China have invested heavily in algorithmic capacities, often rebranding themselves as ‘AI companies’ in the process. Ahmed and Wahed (2020), for example, have shown how the former group of companies has come to dominate research output in computer science, particularly around deep learning, leading to a process of ‘de-democratization’ in knowledge production. In a similar vein, Riedl (2020) has argued that the recent trend toward larger and larger models (e.g. GPT-3 and beyond) raises barriers to entry. This has put companies like OpenAI (a billion-dollar company started by Elon Musk, Sam Altman, and others, with Microsoft among its current stakeholders) in the position of a ‘de facto arbiter of ethics and morality with regard to the deployment of AI services’ (Riedl 2020, n.p.) when they decide which projects to approve and what to define as ‘misuse.’

Crucially, however, this emerging ‘political economy of AI’ (Srnicek 2018) is not simply an effect of anti-competitive measures or other ‘bad’ behavior (though one can certainly find many examples of problematic conduct). ML in particular thrives on the availability of large quantities of data, ample compute capacity, competent personnel, and a user base that can serve both as a source of feedback and as a market. Large tech companies have been particularly adept at acquiring these and other forms of ‘digital capital’ (Tambe et al. 2020) over recent decades, putting them in a uniquely favorable position in the race for the most powerful technologies. Different kinds of network effects and economies of scale are at work in this situation, including the well-known advantages large companies have in terms of average cost per customer. But there are also more specific factors at play, which merit a more detailed discussion.

To build an ML-driven application, engineers typically use data to train a model on input-output bindings. A simple example is a system that uses manually labeled emails to create a spam filter. More complex examples involve connecting behavioral data to certain desirable outcomes, such as ‘good’ search results or content recommendations. The more accurate the model’s output, the stronger the incentive to use the service. The more the service is used, the more input-output bindings (positive and negative) become available for further training, again improving accuracy, attracting users, and so on. The value generated by this feedback loop can, for example, be spent to collect more data, to hire more competent personnel to improve methods, or to create models for generating synthetic data. This again increases accuracy, and an early advantage can turn into a dominant position in the market.
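
To make this loop concrete, here is a minimal sketch of the spam-filter example in Python. It assumes scikit-learn and a handful of hypothetical emails; no real service operates at this toy scale, but the cycle of train, deploy, collect new labels, retrain is the same one that runs over billions of interactions.

```python
# Minimal sketch of the feedback loop: train, deploy, collect labels, retrain.
# The emails and labels below are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Initial input-output bindings: manually labeled emails.
emails = [
    "Win a free prize now",        # spam
    "Meeting rescheduled to 3pm",  # not spam
    "Cheap meds, limited offer",   # spam
    "Quarterly report attached",   # not spam
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# In service, every user who flags (or unflags) a message contributes a new
# input-output binding, growing the training corpus.
emails += ["Exclusive offer just for you", "Agenda for tomorrow's call"]
labels += [1, 0]

# Retraining on the larger corpus closes the loop: more usage yields more
# labels, which (typically) yields higher accuracy, which attracts more usage.
model.fit(emails, labels)
print(model.predict(["Limited offer: free prize"]))  # likely [1]
```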

The most powerful players in AI have also pursued an aggressive acquisition strategy in recent years, both to expand the breadth of their services and to keep potential competitors at bay. While mergers and acquisitions are common across fields of enterprise, the situation is particularly acute in the AI and ML space, where a small handful of massive firms continuously buy up a remarkable number of new participants as they emerge. While other fields that feature regular mergers — banking and insurance, medical services, telecommunications, media, etc. — are already regulated, this is not the case for the AI and ML industry. As a consequence, and similar to what happened in the fossil fuel sector, large tech companies have moved towards vertical integration in an attempt to control the full value chain. At the bottom, this concerns hardware, such as AI-specific microchips, as well as software frameworks that are often open-sourced to initiate and leverage communities of practice and dedicated technical expertise. At the top, this manifests in growing portfolios of end-user services, including services for other companies, in particular through cloud computing. The result is a dense and tightly integrated ecosystem of technologies, expertise, and business synergy in which newcomers find it difficult to compete, even if they are not immediately bought out.

Another feature of AI that may contribute to monopolization is its wide applicability. Since AI is often described as a set of general-purpose technologies that can be used for very different tasks, there is great potential for cross-market expansion and thus further economic benefit for dominant companies. We have seen this logic of ‘concentric diversification’ (Thompson & Strickland 1978) play out over at least the last decade as large internet companies have expanded from their core business — search, online retail, social networking — into many new areas of activity, leveraging their established user base, technological expertise, computational resources, and huge data pools. AI promises to further facilitate their expansion into new sectors of the economy. Consider for instance GPT-3, a very large language model built and commercialized by OpenAI. Although still in beta, it has been used for machine translation, style variation, authoring text, powering chatbots, simplifying complex legal documents into plain English, and even generating programs in a given programming language. Combining GPT-3 with a corpus of text–image pairs, OpenAI has constructed DALL·E, a model that can generate images from text captions, opening up yet another set of possible applications. The sheer size of these so-called ‘foundation models’ makes them flexible to use, but they are also difficult and costly to replicate and maintain. The trend towards ‘bigger is better’ is thus again a driver of market concentration.
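
The point about general-purpose use can be illustrated with a small experiment. GPT-3 itself is not openly available, so the sketch below uses the far smaller open GPT-2 model via the Hugging Face transformers library as a stand-in, with prompts that are our own illustrative assumptions. The outputs of such a small model will be rough, but the mechanism is visible: a single model is steered toward different tasks by the prompt alone, with no task-specific retraining.

```python
# One generative language model, several tasks, selected purely by prompt.
# GPT-2 here is a small open stand-in for the much larger GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Translate English to French: cheese =>",                # translation
    "The committee met for three hours and agreed on next "
    "year's budget. In short:",                              # summarization
    "# Python function that reverses a list\ndef reverse(",  # code generation
]
for prompt in prompts:
    result = generator(prompt, max_new_tokens=25, do_sample=False)
    print(result[0]["generated_text"], "\n---")
```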

The risks of monopolization are varied. While the traditional yardstick for harm in competition law, at least in the US, is how much a company charges consumers over its costs, technology companies and products may require a much broader outlook. For example, how would we measure whether Facebook is a monopoly if most of its products are free to consumers? And while network effects promise lower prices for consumers, reliance on a small set of actors can create relations of dependency that not only translate into economic and political pressure, but may also prove more costly in the long run, whether or not these actors eventually decide to raise prices.
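
For reference, the yardstick alluded to here is usually formalized as the Lerner index, a standard textbook measure of market power rather than anything specific to this debate: L = (P − MC) / P, where P is the price charged and MC the marginal cost of production. Under perfect competition L is zero, and it rises toward one as market power grows. For a product whose price to consumers is zero, the index simply breaks down, which is exactly the measurement problem the question above points to.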

Furthermore, AI components are becoming increasingly important in public and private service provision, in embedded devices, in critical infrastructures, and elsewhere. Concentrated control over the ‘means of production’ — the means to build both applications relying on AI technologies and the AI technologies themselves — guarantees wide-ranging influence on the specific workings of these components as well as on the overall development of AI as a technological field. More specific concerns about privacy and bias, but also about effects on democratic agency and individual autonomy, are circumscribed by economic power and technological affordances — which always entail political power — and interventions on these issues therefore depend largely on how the larger ecosystem is organized. That ecosystem is global in scope. Considering that most of the dominant players in the field are currently located in the US and China, the geopolitical stakes of concentrated control over technological capabilities cannot be ignored. If AI is controlled by a small number of companies and countries, other nations and even whole continents risk diminished autonomy in the digital domain. Political issues of privacy, agency, and self-determination may become further dependent on conditions decided by select actors in privileged locations.

However, there are also a number of initiatives working toward a more distributed future for AI and adjacent technologies. While legal scholars are debating new standards for ‘antimonopoly’ measures that explicitly include democratic ideals (e.g. Khan 2018), governments in Asia (e.g. the MeghRaj cloud initiative in India) and Europe (e.g. the transnational GAIA-X project) are seeking to gain some level of independence through publicly funded cloud infrastructures. Grassroots projects such as EleutherAI seek to create open-source alternatives to proprietary models. But many countries simply do not have the autonomy, resources, or political capacity to sustain similar efforts, whether legal, economic, or led by civil society. What we learn from examining the intersection of technological features and economic embeddings is that the fight against monopolization in AI is necessarily an uphill battle, one that will require transversal efforts to overcome a situation that heavily favors incumbents. Lawmakers and civil society should take the specific character of AI technologies into account in the regulatory and activist efforts currently under way.


References

Ahmed, Nur, and Muntasir Wahed. 2020. “The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research.” arXiv:2010.15581 [cs], October. http://arxiv.org/abs/2010.15581.

European Parliamentary Research Service. 2020. “Regulating Digital Gatekeepers. Background on the Future Digital Markets Act.” Report PE 659.397. https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/659397/EPRS_BRI(2020)659397_EN.pdf.

Kang, Cecilia, and David McCabe. 2020. “House Lawmakers Condemn Big Tech’s ‘Monopoly Power’ and Urge Their Breakups.” The New York Times, October 6, 2020. https://www.nytimes.com/2020/10/06/technology/congress-big-tech-monopoly-power.html.

Khan, Lina. 2018. “The New Brandeis Movement: America’s Antimonopoly Debate.” Journal of European Competition Law & Practice 9 (3): 131–32. https://doi.org/10.1093/jeclap/lpy020.

Mitchell, Timothy. 2011. Carbon Democracy: Political Power in the Age of Oil. London, New York: Verso.

Powles, Julia, and Helen Nissenbaum. 2018. “The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence.” OneZero, December 7, 2018. https://onezero.medium.com/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53.

Riedl, Mark. 2020. “AI Democratization in the Era of GPT-3.” The Gradient, September 25, 2020.

Srnicek, Nick. 2018. “Platform Monopolies and the Political Economy of AI.” In Economics for the Many, edited by John McDonnell, 152–63. London, New York: Verso.

Stucke, Maurice. 2018. “Here Are All the Reasons It’s a Bad Idea to Let a Few Tech Companies Monopolize Our Data.” Harvard Business Review, March 27, 2018. https://hbr.org/2018/03/here-are-all-the-reasons-its-a-bad-idea-to-let-a-few-tech-companies-monopolize-our-data.

Tambe, Prasanna, Lorin Hitt, Daniel Rock, and Erik Brynjolfsson. 2020. “Digital Capital and Superstar Firms.” NBER Working Paper 28285. Cambridge, MA: National Bureau of Economic Research. https://doi.org/10.3386/w28285.

Thompson, Arthur A., and Alonzo J. Strickland. 1978. Strategy and Policy: Concepts and Cases. Dallas: Business Publications.