
In the lead-up to this year’s India AI Impact Summit, we attempted to pre-bunk a new kind of AI hype that was circulating. We observed that the “right” words were being used to have the wrong conversations.
Impact lingo like “AI for Good”, “AI for climate”, “human capital”, and “frugal AI” evokes ideals of public interest and accountability. But in practice, it is being deployed to justify a familiar vision – a “machine god” future that thrives on environmental exploitation, mass exclusion, and modern imperialism. Terms like “open source”, “sovereignty”, “accountability”, and “democratization” are stripped of their histories and community connections to serve as sales pitches.
Emptied of meaning and severed from their communities, abstract commitments to impact had little effect on outcomes. Narrative slipperiness moved the focus away from public interest or accountability. The result was a preference for spectacle over meaningful impact.
Both industry- and government-focused commitments were vague, framed around adoption rather than outcomes. On the sidelines of the summit, we witnessed the announcement of new investments into data centers and Indian alignment with the US’s Pax Silica.
Going into the summit, we outlined the question that was top of mind for civil society: Is it worth trying to (re)claim the concepts being promoted at events like the summit, or is it better to reject them entirely? We proposed a third way: reframing.
The language of impact may provide a narrow opening – if it can be wrested away from the hype. Even as we critique cynical co-option, many of the underlying values are still worth fighting for. The idea that technology can genuinely help people is ground we must defend.
Twelve interviews with leading critical voices, including Karen Hao, Timnit Gebru, Audrey Tang, and Meredith Whittaker, offered concrete, alternative visions that counter ambiguity with specificity and unsubstantiated claims with grounded realities. Here are our reflections on strategies for resisting and responding to “impact hype”:
Keep questioning: demand evidence, challenge current narratives, and make room for alternatives
Promises of all the good that AI can do – for development, workforces, the climate – remain unsubstantiated, or rest on mountains of assumptions and abstractions. Their detachment from reality comes into focus when we ask clear and pointed questions – and highlight where there is a gap between what is promised and what is delivered. For example, we have seen traction in juxtaposing abstract ideas of the “future of work” with the stories of data workers’ experiences. Localized resistance to data centers, as seen in examples from Chile, states across the US, and Canada, surfaces the trade-offs and impacts on people and land in the face of lofty promises of economic transformation.
In a time of “vibes-based” policymaking, we need more than ever to generate and surface empirical evidence. This work can expose the limits of current AI approaches and point to where intervention is needed – for instance, showing compounding bias due to flawed training data, the concentration and dependency risks in current AI value chains, and harms from the deployment of systems in sensitive uses like welfare and law enforcement.
Confronting today’s deal-making, investment-focused approach means questioning both the scale of the AI race and the assumption that ever more powerful systems are the only path forward. For example, even among alternative approaches like open-source and lightweight models, many default to building or contributing to an LLM or chatbot that serves an ill-defined purpose. What would it look like to imagine and build alternatives that serve specific, community-based needs, such as linguistic datasets and smaller models attuned to local needs and cultural nuances, or specialized models that tackle specific climate change challenges?
This is an invitation to revisit higher-order questions about AI’s place in broader socio-political discourse, rather than blindly accepting Silicon Valley’s vision of the future: Who is our economy for? What is technology for? And what kind of life do people want?
Concretize: track concepts, map infrastructures, identify failures, and build on existing community efforts
The words we use to talk about AI are shapeshifting under pressure from powerful interests. Concepts that had precise technical meanings in the past, like openness, have been diluted when transposed into new sociotechnical settings. Others that arose from histories of anticolonial struggle, like “sovereignty,” are being co-opted by the empires of AI. Tracking how these ideas have morphed and been hijacked allows us to call out “narrative arbitrage.” Faced with new varieties of AI hype, we must ask ourselves: did this term mean something different fifty, twenty, even ten years ago? Have communities that were central to these conversations been excluded over the years?
To truly evaluate claims of AI impact, we need to map the infrastructural and economic structures of AI. For all its promises to solve climate change, for example, current-day AI is intensifying a nexus between Big Tech, Big Oil, and the state. Words like “democratization” are thrown around, but in practice, democratizing access to AI may just mean “[distributing] the terminals and the data extraction facilities of a centralized authority.” The details of the transnational deals that structure the global AI economy need scrutiny, too. Are Global South countries, which have data, labor, or raw materials but are trapped in extractive colonial dynamics, in any position to make fair deals with hyperscalers, or are they giving away their “cake” for free?
Centering marginalized members of our societies – farmers, people living in rural areas, communities that speak vernacular languages – is meant to bring the benefits of AI to the people. Yet such communities have been living with AI systems and their consequences for over a decade, especially in India, where Digital Public Infrastructure projects are in advanced stages of adoption. Operating in a familiar top-down fashion, future experiments are likely to replicate the failures of the past, excluding poor people, the elderly, the working class, women, and other marginalized communities, and shifting costs onto them.
An alternative vision must be grounded in the experiences and needs of such communities. Rather than building for a “normal” user, it would build for a range of different experiences. In many cases, communities are already doing this work. “Prototypes of struggle” emerge from communities serving their own members with care. They show that it is possible to build and control tech and infrastructure from below. Democratic governance, too, requires creating conditions to support and amplify the voices of a broad spectrum of community organizations.
Build collaborations and coalitions
The way forward lies in identifying, amplifying, and federating existing efforts rather than building anew from above and from scratch. This is a practical matter: social and economic structures are too complex for any single actor to build for. Building working linguistic AI systems for over two thousand African languages, for example, requires the involvement of linguists, sociologists, and educators, as well as the various communities that speak each language.
Coalitions are critical if alternatives to Big Tech are to stand a chance. Prototypes of struggle are often small, resource-constrained, and struggling to survive. To expand – and to flourish – they need to learn from each other, build on each other’s strengths, and come together to collectively bargain. Groups building towards shared visions of the future can also share technical capacities, for example by federating and pooling compute resources among like-minded organizations.
What’s next?
The India Summit marked AI’s arrival in the Global South, forcing Big Tech to operate on a new discursive terrain. In India, as elsewhere in the Global South, in spite of spiralling inequality, states and capital still need to speak in the name of the people to justify themselves. If the AI hype we’ve all been forced to familiarize ourselves with over the last few years – AGI, superintelligence, existential risk – centers the delusions of billionaires, “impact” centers the needs of people, though weakly, stripped of context, and without the people themselves.
The strategies we’ve introduced here make people’s needs concrete and reground vague abstractions in social, economic and cultural realities. It is clear that the goods on offer don’t match what’s being promised. But we already know much of what it would take to meaningfully build towards values like democratization, sovereignty and linguistic diversity. People are already using digital technologies, including AI, to fight for their communities’ needs. The alternatives are here – they must be nurtured, defended and expanded.