This piece is part of Reframing Impact, a collaboration between AI Now Institute, Aapti Institute, and The Maybe. In this series we bring together a wide network of advocates, builders, and thinkers from around the world to draw attention to the limitations of the current discourse around AI, and to forge the conversations we want to have.

In the run-up to the 2026 India AI Impact Summit, each piece addresses a field-defining topic in AI and governance. Composed of interview excerpts, the pieces are organized around a frame (analysis and critique of dominant narratives) and a reframe (provocations toward alternative, people-centered futures).

Karen Hao is a bestselling author and award-winning reporter who covers artificial intelligence. She was the first journalist to profile OpenAI and wrote a book, Empire of AI, about the company and the AI industry.

In this interview, Hao argues that framing Global South countries as “data rich” reinforces colonial dynamics by enabling the extraction of data, minerals, and labor, thereby deepening inequality and exploitation. Hao shows how tech giants are following the playbooks of historical empires to promote a resource-intensive model of AI development—a model, she says, that the Global South should reject. She cautions against the emerging pattern of corporations “taking the data and trying to sell it back” to communities. Instead, she highlights examples of specialized, small-scale AI projects that better care for local data and prioritize community needs, cultural preservation, sovereignty, and the public interest over extraction.

Following is a lightly edited transcript of the conversation.

FRAME: The mainstream discourse positions underresourced regions and communities as “data rich” as a way to signal their strategic importance and provide a way out of poverty. In practice, though, this perpetuates colonial dynamics and puts the Global Majority on a familiar path to extraction and exploitation.

It may feel pragmatic to offer data in exchange for a seat at the table—but in fact it repeats history and increases inequality.

The idea of [Global South countries as “data rich”] perpetuates colonial dynamics. Wealthy countries get to come into poor countries and extract whatever resources they want, not just data—also critical minerals, all the things that you need to build data centers, and the labor. But it’s always to contribute to the strengthening of rich countries’ economies, to help them get richer. Why are we doing that? Why are we repeating history on this front and increasing inequality around the world, under the guise of supporting a technology that’s supposed to be an equalizer, leveling the playing field globally?

There is a “pragmatic” understanding that [Global South countries] want to have a seat at the table. The way to do that is to make themselves indispensable in some way to the global supply chain of AI development. When you don’t have capital, it is really hard to figure out what to offer instead, and the easiest thing at that point is to offer your people, your minerals, and your data. I absolutely recognize and sympathize with the position that they’re in, because one of the reasons they’re only able to offer these things is the history of colonialism dispossessing them of the strong economic growth that Global North countries experience today. At the same time, I wish there were a more expansive idea in general about how all countries should be brought to the table, not just on the basis of having to offer themselves up to extraction and exploitation.

Keoni Mahelona and Peter-Lucas Jones, the two people that run Te Hiku Media, always say: “Data is the last frontier of colonization. They took our land and then tried to sell it back to us. Now they’re taking our data and trying to sell it back to us as a service.” Poorer countries are trying to upload their data to the “mother ship” as a way to buy their seat at the table. But what they’re actually doing is ceding even more power to the richer countries to lord over them and set the agenda.

Silicon Valley is replicating the playbook of historical empires—via dispossession, narrative control, and quasi-religious elements.

What we’re seeing today with the way that Silicon Valley is orchestrating the development of AI systems is pretty much how empires of old operated. They’re consolidating an extraordinary amount of economic and political power by dispossessing the [global] majority of their resources, their land, their labor, their data. Even as that labor contributes to the expansion of the empire and accrues more value to it, the people providing the labor are not seeing that value themselves.

Also, empires engage in control of information flows, where they try, through either soft or hard means, to shape the narrative such that they can continue to do what they want and censor inconvenient truths that undermine their imperial agenda. We see the industry engage in that through controlling what science is ultimately produced about the fundamental capabilities or limitations of AI systems.

The last important dimension to recognize is that there’s a quasi-religious element to the push for the AI empire. There is this narrative that these companies are the “good empire” on a civilizing mission to bring progress and modernity. So there’s this moralizing tenet, but it’s also undergirded by fears around AI potentially going rogue and devastating humanity—and that is why they need to have supreme control over the development of this technology. Because if it falls into the hands of the “bad actor,” that could mean the total obliteration of the human race.

REFRAME: Hao proposes abandoning the narrative of inevitability around large-scale AI, demonstrating that it is not the only trajectory available to us, and argues that we must trade “one-size-fits-all” models for community-based approaches.

Global South countries can and should reject the extractive vision set out by Silicon Valley.

One of the things that I feel quite strongly about is that a lot of countries are trying to figure out how to plug into a game that’s already been defined by the US and by Silicon Valley—which is the large-scale AI model game that requires an extraordinary amount of resources.

But that’s not the only conception of AI that could exist. There are plenty of other types of AI models that do not require large amounts of resources or capital. There are specialized, small-scale, localized AI systems that have been around for much longer than ChatGPT and large language models. To me, any country that’s not the US or China should not be trying to think, “How do we insert ourselves into what Silicon Valley is trying to do and try to mirror or replicate or take inspiration from this large-scale approach to AI development?” They should be thinking, “What are the opportunities that AI could help unlock in our country? What are the problems that we actually need to solve?” And then, “What are the types of AI systems that we should be developing, independent of the vision being exported from San Francisco, that are localized to the resources, the culture, and the challenges that we actually have?”

Within the African continent, for example, there’s been a long history of this with Deep Learning Indaba and Masakhane. These organizations have asked, “How do we design AI by Africans, for Africans? How do we use these tools to preserve our languages rather than continue eroding them away and having English dominate? How do we help farmers increase their crop yields by supporting the increased resilience of the electric grid?” I think that’s actually much more visionary.

AI systems that serve the public interest are smaller and more specialized—and built in close consultation with the communities they serve.

In general, I want to see smaller models, more specialized models, and more models that are developed with more participation from the communities that will actually be using them or will have the models used on them. 

An example that I often give is the nonprofit Climate Change AI. They’ve documented all of these different ways that specialized AI models can actually help advance and tackle very specific aspects of the climate-change-mitigation problem. None of them have anything to do with large-scale AI models. They’re all about things like improving grid resiliency, improving renewable energy integration on the grid, optimizing the energy demand of a building or even a city, optimizing traffic, optimizing supply chains.

AI can be useful for specific cases—but only when designed with the needs and values of the community at the center.

The other example is an organization called Te Hiku Media (mentioned earlier) in Aotearoa New Zealand. They are a nonprofit radio station that has broadcast in the language of the Māori people for decades. As part of the broader movement to revitalize the te reo Māori language, which was almost lost because of colonization policies, they thought they could open up this rich archive for te reo Māori learners, so that learners could listen to the sounds of their elders, especially those who predated colonial distortions of the language.

It ended up being the perfect example of a moment in which AI could be really useful to transcribe audio—especially since there are not that many te reo Māori speakers who have that level of advanced language skill and the time to do that work. They developed a speech recognition tool, and in developing it, they took a fundamentally different approach from the norms of the tech industry.

The entire life cycle of how they approached the development of this tool is exactly what I wish we would see all around the world. They were constantly in communication with their community to make sure that the technology was wanted, that they were designing it in ways that were actually delivering benefit, and that they were preserving the values of the community. For example, the te reo Māori community—like many indigenous communities—really, really cares about its sovereignty. So a huge pillar of the project was: regardless of what we do in this project, we will never undermine the sovereignty of our community by giving that data to Big Tech.

Those values might be different for different communities, but each community should decide what it wants and what it wants to uphold. By contrast, the one-size-fits-all model of AI development is inherently problematic and colonial. If we can shift away from that to thinking about a multitude of small, specialized, localized models that are much more controlled and governed by each individual community, I think we will be in a much more democratic place with AI development.


Watch the full conversation between Karen Hao and Alix Dunn here.
