
This piece is part of Reframing Impact, a collaboration between AI Now Institute, Aapti Institute, and The Maybe. In this series we bring together a wide network of advocates, builders, and thinkers from around the world to draw attention to the limitations of the current discourse around AI, and to forge the conversations we want to have.
In the run-up to the 2026 India AI Impact Summit, each piece addresses a field-defining topic in AI and governance. Composed of interview excerpts, the pieces are organized around a frame (analysis and critique of dominant narratives) and a reframe (provocations toward alternative, people-centered futures).


Audrey Tang, Taiwan’s cyber ambassador-at-large, first digital minister (2016–2024), and 2025 Right Livelihood Award laureate, is celebrated for her pioneering efforts in advancing the social use of digital technology to empower citizens, renew democracy, and heal divides.
In this conversation, Audrey Tang reflects on what it means to democratize AI today. For her, the AI Summit’s narrow framing of democratization, which is focused on expanding access to compute, is not enough. She characterizes this approach as putting humanity “into the loop of AI,” entrenching harms that today’s governance systems are incapable of dealing with. She calls instead for a more ambitious approach that “puts AI in the loop of humanity.” This broader vision calls for new forms of “plural governance” that center the broad tent of organizations that are neither state nor market. Composed of people-public-private partnerships, plural governance can defend society from AI threats and bring people together.
Following is a lightly edited transcript of the conversation.

FRAME: According to the dominant framing, democratizing AI means localizing compute. But in a colonial and monopolistic world, the sole pursuit of localizing compute can end up entrenching a form of power that fractures democracy.
Equitably distributing compute is necessary, but it can feed into digital colonialism if it is not accompanied by the redistribution of models and governance.
Equitable distribution of compute and access to technologies is a necessary condition of democratization. But treating this necessary condition as the sole focus is dangerous. If you only have local compute, it is still a form of digital colonialism. If a country has powerful servers and chips made in Taiwan, but the model, the pipeline, and the governance model are still shaped by the values of Silicon Valley or of Beijing, then you have not democratized power. You have just distributed the terminals and the data extraction facilities of a centralized authority. Actually, it might be even worse, because without decentralized compute, that centralized authority has no way to collect real-time data; distributing compute gives it one. If you just distribute compute and give up the local alignment of that compute, it may look like you have data sovereignty or compute sovereignty, but you have given up the sovereignty of alignment.
The speed and volume of AI are overwhelming today’s democratic governance systems.
Think of democracy as it currently works: a low-bandwidth technology. We vote once every few years—that’s a very thin stream of data. There are now a lot of ways, like deepfake scams and other fraud, to tap into this vacuum of human coordination and convince people of things that are not in their best interest. Actors ranging from organized crime to those attempting the top-down takeover of smaller, weaker states through fraud are using AI to do this. We simply cannot fight such adversarial use of AI with existing human coordination networks alone. It is almost a necessity to upgrade our coordinated defense using technology, because the attackers are not going to stop. Organized online fraud is not going to stop just because all the nations participating in the summit sign a treaty against it.

REFRAME: Democracy means putting AI into the loop of humanity. Measure its harms and build new forms of technologically supported plural governance that bring people together.
AI can augment the existing superintelligence of human society.
A frame that I prefer is: How can AI increase the bandwidth of human listening and agency? Because then it puts AI within the fabric of society, of the superintelligence that we already are, and increases human superintelligence too. Instead of putting citizens, humans, in the loop of AI, we need to take them out and put AI into the loop of humanity. Treating AI as an infrastructure for human coordination is faster, fairer, and more fun than the colonial alternative.
Today’s AI is harming democracy. We need enforceable measures and benchmarks to fix this.
We should stop thinking about AI as a kind of new electricity. It’s not pure utility, distilled utility. There are a lot of pollutants—and the pollutants are not going to go away unless people start measuring them. On social media, polarization per minute can also be abbreviated as “PPM.” And if we start measuring that, as we did with the CO2 PPM [parts per million], things will be much better. What’s measured gets improved.
At the bare minimum, there needs to be undeniable common knowledge so that a country can see emerging harms and simply say, “Okay, this is committing what’s called epistemic injustice. It is colonizing our way of relating, of being. And so unless a foreign model stops causing such harm, or even better, if a foreign model can help to repair some of those harms, we’re not going to let it enter into our work stream, into our education stream, or any other stream.”
We need a very short feedback loop from the detection of local harms to an undeniable benchmark, one that makes those harms visible and unequivocally enforceable.
The plural sector is essential for democratically governing AI.
In Taiwan, what we like to call “the plural sector” is a wide spectrum from co-ops and mission-locked, mission-aligned corporations all the way to pure advocacy organizations and some spiritual organizations. It’s an extremely broad tent: anything that is not run by the public sector or purely for profit. We need to show leaders that empowering the plural sector to act as both auditors and red teamers in the digital economy is the only way to scale safety.
We’re rapidly heading toward a patchwork takeoff, where intelligence is distributed across millions of agents. Centralized oversight is a bottleneck. No single government ministry can monitor everything. And even if it could, in a totalitarian way, monitor everything under its jurisdiction, it still would not have defense in depth. The more centralized you are, the more innovators at the edge are disempowered, and the more brittle the entire ecosystem becomes.
We need to move from public-private partnerships to people-first, people-public-private partnerships—4P, not just 3P. Civil society is not just for protesting against something; it is also for demonstrating, showing something new. If you shift from protest alone to demonstration, then you have a distributed immune system of democracy.
AI and digital technologies can be used to bring people together.
In Taiwan, we do not talk about future extinction risk. We talk about the organized fraud that people are experiencing right here and now. And instead of saying, “Let’s make a universal top-down rule to enumerate the AI risk,” we say, “Let’s use AI systems to help people cohere and agree very quickly, in the here and now, against fake synthetic intimacy, fraud, and other current-day issues.”
After we came together last March in an online alignment assembly, the legal drafts were published in May and everything was passed in July. Throughout this year in Taiwan, there have simply been no more deepfake ads on social media, because the people drew a red line together around the large fire: they agreed to put a firewall against the synthetic fraud fire of social media companies. Now we coexist just fine with synthetic media, because it is very useful. People use it for short clips and things like that, but it does not lead to organized fraud in Taiwan.
When the large fire happened, we did not just say, “You know, that’s the price of progress.” Rather, we said, “Let’s light some campfires.” These campfires also use AI, also fire, but they bring people together.
Watch the full conversation between Audrey Tang and Alix Dunn here.