This piece is part of Reframing Impact, a collaboration between AI Now Institute, Aapti Institute, and The Maybe. In this series we bring together a wide network of advocates, builders, and thinkers from around the world to draw attention to the limitations of the current discourse around AI, and to forge the conversations we want to have.

In the run-up to the 2026 India AI Impact Summit, each piece addresses a field-defining topic in AI and governance. Composed of interview excerpts, the pieces are organized around a frame (analysis and critique of dominant narratives) and a reframe (provocations toward alternative, people-centered futures).

Usha Ramanathan is an Indian lawyer, activist, and researcher who has worked on law, poverty, and rights for decades. She has been a leading figure in India’s privacy movement and has highlighted problems with the Aadhaar digital ID project.

In this conversation, Ramanathan unpacks the “AI for development” narrative from the longer perspective of India’s digital development journey, focusing on the Aadhaar project and its disruption of the welfare state. She tracks the continuities between these projects, showing how they have been propelled by false promises and threats of exclusion. While they have benefited the state and corporate sector, they have delivered suffering and precarity to most Indians. In place of the utopian perfection promised by promoters of AI and digital technologies, Ramanathan calls for a humanism based on struggles for freedom and self-realization.

Following is an edited transcript of the conversation.

FRAME: AI promises to revolutionize welfare and development. In reality, this familiar playbook delivers power for the state, profit for capital, and precarity for the people.

“AI for development” is sold by business interests to promote trickle-up economics—following a familiar pattern where technology is sold in the name of the poor, but profits accrue to the private sector.

For decades in India we have witnessed development processes that have tried to build an economy on the resources available to us: land, water, and minerals. This has been a battle because people live on the land. If you take that away, you displace people. We’ve seen many movements over the years against the plundering of the earth and the mass displacement of hundreds of thousands. These movements have forced changes in the law, so the state can’t just go and take what it wants. 

Business interests have sold data as the solution to this impasse of development. The chief spokesperson of the Aadhaar biometric ID project, Nandan Nilekani, once said that what we actually need is a trickle-up economy. This is a country where people are not wealthy, but they have something that can generate wealth: data. Data has become the new property. Everyone should give their data up so that it trickles up and will turn into an economy, which we call the digital economy in India today. 

When Aadhaar first began, they said it was about helping the poor get an identity so that they could access state support. We in civil society didn’t go into this project with skepticism, but it quickly became clear that the people working on the project really didn’t know the poor at all, even though they were shooting from the shoulders of the poor.

When we realized how Aadhaar was being promoted, we began to ask questions. In 2009, for instance, we asked questions about the convergence of data, about what would be done with the information collected about us. If private entities were to have it, what would they use it for? Was it all right for personal data to become so widespread and to be exchanged between so many actors?

Our skepticism has grown because we still haven’t received answers. Aadhaar is now practically mandatory in India, and private entities have access to it. Creating a database and handing the data over to companies with no discernible protection should worry any government concerned about the safety of its people and about national security. But the project was never about welfare so much as about the corporate ambition to exploit the business opportunities of India’s massive population.

The problems with these systems, which are experienced by marginalized people, are systematically obscured.

We also hypothesized that older people, working-class people doing manual work, people who work with chemicals, women, and other marginalized communities would have difficulty identifying themselves in this system. Over the last fifteen years, we have seen that, in fact, every one of the things we thought would be a problem has indeed been a problem.

While we were campaigning against Aadhaar, these procedures led to the death of a child whose parents couldn’t link their Aadhaar to the Public Distribution System, India’s food welfare system. Even that didn’t move them to reconsider what they were doing.

You can only push technology-for-development narratives if you are completely clear that you don’t want to acknowledge the problems. That is why you can’t afford to talk about people getting excluded. You can’t talk about data and how data is being used to track people.

We can’t even call this an experiment because in an experiment one is trying to find out what the facts are. This is pushing an agenda, making it mandatory and saying everybody must follow it. There is no feedback loop. It was not their business to make sure the ID worked. It was up to people to make it work. And whether it worked or not, people had to keep feeding their data to the system.

I would think that the purpose of meetings like the AI Summit would be to debate amongst ourselves, to understand what position we are in and what should be done next. But it seems now like it’s only marketing all the way down.

The application of AI within the welfare state is creating new forms of precarity and eroding society.

With the use of AI in development and welfare, we are in a situation where every person is asking themselves: God, what will happen to me today? Will my bank account be shut? Will I be able to travel from one place to another? If something doesn’t work will I get my food today? Will I get the employment the state is supposed to provide me? 

This is not hypothetical. It is observational. You see people around you struggling with it and it puts fear into you. Every day you fear what may happen.

This is not a society. It is a realization of the conservative imagination in which every individual has to jump through little hoops just to live a fulfilling life. With digitalization, it is as Margaret Thatcher said: “there is no such thing as society.” That is a political project many people have been trying to make a reality for a long time.

REFRAME: We must center our ongoing human struggles for freedom and self-realization instead of falling for the false perfection promised by AI.

This idea that all of us should become servile to technology and that that servility will produce peace and calm and happiness for us—you don’t find this even in mythology. I don’t know how, in reality, we are expected to accept this. 

There is this idea that we need to burn our legacies and move on, that all the ways we used to live are useless ways to live, that whatever technology produces will be better than what we produce and what we are.

Through the centuries, people have fought for freedom and control over their own lives. For example, patriarchy says women shouldn’t feel comfortable being free. The women’s movement has been about saying, “Get off our backs; we need to get on with our lives.” We are fighting this battle all the time for the self-realization that “even if I make mistakes, let them be my own mistakes. Let them not be somebody else’s stupid mistakes.” In the middle of all this, why would I think that AI is actually the answer?


Watch the full conversation between Usha Ramanathan and Alix Dunn here.
