Illustration by Somnath Bhatt
An exploration of Aadhaar, India’s National Biometric ID System, and the bureaucratisation of consent
A Guest Post by Praavita Kashyap.
This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.
The collection of vast amounts of data is necessary to the functioning of AI and machine-learning systems. Where that data is personal data, the ideas of consent, informed consent, and the redundancy of consent have become part of debates on technology and rights.
Governments across the world are looking to the potential of AI to open new markets and drive economic growth. In 2018, the Government of India, through Niti Aayog (formerly the Planning Commission), released a discussion paper titled ‘National Strategy for Artificial Intelligence.’ This document stated that “for accelerated adoption of a highly collaborative technology like AI, the government has to play the critical role of a catalyst in supporting partnerships, providing access to infrastructure, fostering innovation through research and creating the demand by seeking solutions for addressing various governmental needs.”
As governments attempt to build AI marketplaces, the collection and use of big data is legitimised through a process I would call the bureaucratisation of consent, which renders consent mechanical and meaningless in novel ways. The use of consent by the state as a regulatory tool only further reveals the imbalance of power in the imposition of new technologies upon a population. In this essay I explore the use of consent in Aadhaar, India’s national biometric ID programme.
The Aadhaar project
The Aadhaar project was initiated in India in 2009 as a national biometric ID that would provide a Unique Identification Number (UID) to all Indian residents. It became the biggest project of personal data collection in India and was created, implemented, and funded by the State. Mass enrolment into Aadhaar was achieved when the poor were told that they would not receive legal entitlements, including subsidised food rations and the enrolment of children in government schools, if they did not have an Aadhaar card. People rushed to enrolment centres to submit their biometrics, even as the Supreme Court, in cases challenging the constitutional validity of the project, passed interim orders in 2013 and 2015 stating and reiterating that Aadhaar could not be made mandatory. Despite these orders, state governments steadily proceeded to make Aadhaar mandatory for various welfare programmes, subsidies, and entitlements.
The language used to describe the project carried with it a promise: empowerment, trustworthiness, data-driven national wealth, transparency, and the end of corruption in social welfare. Instead, mass enrolment for the Unique Identification (UID) number was achieved under duress. Coercion, fear, frustration, confusion, and exclusion pervaded the implementation of the project, especially amongst the poor and marginalised.
Around October 2015, when the Supreme Court ordered that Aadhaar could only be linked to a programme voluntarily, a check box was added to the UID enrolment form, where a resident who was enrolling would simply place a tick mark in a box denoting their consent. A 2016 circular of the National Payments Corporation of India (NPCI) stated that ‘it is mandatory for the banks to obtain the consent of the customer.’ A 2017 circular of the NPCI later stated that ‘explicit consent’ was important.
By 2016, the linking of the UID to the bank accounts of workers under the National Rural Employment Guarantee Act (NREGA), which guarantees 100 days of wage employment a year to rural households and thereby provides a social security net, began to be accompanied by the collection of consent in ‘Consent Camps.’ In practice, the UID had become mandatory for workers under the Act, yet their consent to link it to their bank accounts was to be collected in camps held in various parts of India. The dystopian absurdity of collecting the consent of workers in government-organised camps unfolded.
By March 2016, the Ministry of Rural Development issued a notification which asked all states to ensure that workers were present at bank branches to facilitate UID linking to their bank accounts. The notification states “During this exercise, the advantages of Aadhaar seeding [linking] may be explained and consent of workers for Aadhaar based payments be taken.”
By January 2017, the Union Government invoked Section 7 of the newly passed Aadhaar (Targeted Delivery of Financial and Other Subsidies, Benefits and Services) Act, 2016, and made it mandatory for workers to obtain and link Aadhaar numbers by 31 March 2017, allowing other identity cards to be provided in the interim. This deadline continues to be extended. In March 2017, a notification by the Ministry of Rural Development noted “many instances of non-receipt of consent forms” and specified a Standard Operating Procedure which “incorporates the process flow for obtaining, submitting and updating Aadhaar seeding [linking] consent forms of MGNREGA workers.”
A manual published by the Ministry of Rural Development in July 2017 marks out the mechanical steps that are to be taken in holding these camps. The NREGA Master Circular for the year 2020–2021 reiterates the Standard Operating Procedure while simultaneously stating that “The Aadhaar numbers of all active workers, enrolled must be seeded [linked] in the database.”
For NREGA workers, linking their bank accounts to Aadhaar was made mandatory, and their consent was collected in camps and forms through mechanical, standardised processes. When the state creates, owns, and implements a massive data collection project, consent is eroded and turned into a meaningless bureaucratic step that unsuccessfully attempts to provide a cover for coercion.
Regulation through Consent
Consent signifies the acquiescence or willingness of a person to do something, be something, have something done to them, or part with or gain something. Leta Jones and Edenberg write in the Oxford Handbook of Ethics of AI:¹ “As a normative concept, consent can perform the “moral magic” of transforming the moral relationship between two parties, rendering permissible otherwise impermissible actions. Yet, as a governance mechanism for achieving ethical data practices, consent has become strained — and AI has played no small part in its contentious state.” And further: “In most interpersonal consent transactions, voluntariness is a necessary component for one’s token of expressed consent to have normative force…The expressed consent is insufficient if there was no viable option otherwise.”²
Before consent became a part of data regulation, the language of informed consent dealt with protecting rights over the body and was written into the Nuremberg Code on medical ethics. From a means to protect individual rights over the body, consent has come to be used to protect other rights as well. While the phrase ‘consent is broken’ has become almost ubiquitous, governments continue to rely upon consent to protect privacy and regulate new technologies. A consent requirement for processing personal data has become part of the legal and technical frameworks that seek to regulate AI and other technologies. The European General Data Protection Regulation (GDPR) is a prime example of a law which seeks to protect the “fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data,” where informed consent is a basis for lawful processing of personal data. The GDPR states that “Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment.”
Today, the Unique Identification Authority of India (UIDAI) claims that 99.5% of the population of more than a billion people have enrolled in the Aadhaar programme. Any reference to consent now frames consent as a requirement that must be obtained for a system in which everyone must be enrolled.
Over the last decade, the focus on governance in India has pivoted to easy digital solutions. The UID/Aadhaar system and the data it collects are the basis for India Stack, a made-in-India digital interface which proposes to enable “governments, businesses, start-ups and developers to utilise a unique digital infrastructure to solve India’s hard problems towards presence-less, paperless, and cashless service delivery.” The first three seek to do away with the material — people, paper, cash — enabled by the spectre of consent. Consent is thus reframed as something that “ensure(s) the free and secure movement of data, including personal biometric information and other relevant linked information.”
As India positions itself as a future leader in the field of AI, consent and the coercive collection of data will continue to haunt it. While contemporary debates on ethics in AI recognise that privacy protection has moved beyond notice and consent models and perhaps to explainability, accountability, and privacy by design, consent as a tool to render data collection ethical and therefore permissible continues to be used by governments. In the rush to create new marketplaces through the use of AI, the bureaucratisation of consent is used to collect vast quantities of data.
The bureaucratisation of consent in the deployment of state owned and operated technologies does not even provide a thin veneer of ethics and renders the moral and ethical transformative power of consent itself meaningless and even dystopian. It also prevents and obstructs the difficult work of creating state policies and laws that deepen and enhance the accountability and explainability of the technology involved and its impact on those forced to adopt it. The bureaucratisation of consent preys upon the vulnerable and obscures the cost of harvesting data by the state.
¹ Meg Leta Jones and Elizabeth Edenberg, “Troubleshooting Consent,” Chapter 19 in the Oxford Handbook of Ethics of AI (Eds. Markus D. Dubber, Frank Pasquale, Sunit Das)
² Meg Leta Jones and Elizabeth Edenberg, “Troubleshooting Consent,” Chapter 19 in the Oxford Handbook of Ethics of AI (Eds. Markus D. Dubber, Frank Pasquale, Sunit Das) [Pg. 368]