Why AI Needs Ethics from Below

Illustration by Somnath Bhatt

A guest post by Julian Posada. Julian is a Ph.D. Candidate at the Faculty of Information of the University of Toronto. His dissertation, Unsustainable Data Work, investigates the experiences of workers in Latin America who annotate data for artificial intelligence through digital labour platforms. Twitter: @JulianPosada0

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.


Every day, Melba wakes up at 4:00 AM in Venezuela to annotate data for machine learning algorithms. She is one of the many workers worldwide who log into crowdsourcing platforms to perform tasks for technology companies developing artificial intelligence. Working full time, she can earn between fifteen and twenty U.S. dollars per week, enough to supplement her pension that, because of ongoing hyperinflation, is now worth a dollar per month: “not enough to buy half a dozen eggs; not enough to buy a piece of cheese or bread.”

Melba and many other Venezuelans have turned to work for online platforms because of the country’s dire economic situation, compounded by the COVID-19 pandemic. While these jobs provide low barriers to entry and a steady income in dollars, they do not provide secure employment: the workers are considered freelancers and are called raters, labelers, or annotators by these companies. As part of my dissertation research, I investigated the experiences of workers throughout Latin America who annotate data for AI through digital labor platforms. Through this research, I encountered Melba and many others whose stories I include below. I got in touch with them through worker groups on social media, and we have talked online since the pandemic started. I reached out to them specifically because Venezuelans constitute the majority of AI platform workers in Latin America and, as a Latino myself, I wanted to hear about their experiences.

What I found overall was that this type of platform labor is exploitative, but also designed to erase workers’ voices. This essay explores the working conditions of Venezuelan platform workers and their relationship with the AI they help create. I argue that labor is an underexplored and under-researched area in AI. As critical AI scholars, we cannot state what is “ethical,” “beneficial,” or “responsible” without respecting the rights and potential of the humans who make this technology possible through their work.

To companies that run these platforms — often based in countries like the United States, Germany, and Australia — contractors are invisible, cheap, and receive little recognition. Yet these ‘ghost workers’ have fueled many of the recent advances in artificial intelligence. Machine learning developers often consider them a necessary burden because of the need for human labor to annotate data combined with the fear of “worker subjectivity” becoming embedded as biases in the datasets and, ultimately, the algorithms. This vision from above fails to see that human labor is not a cost but an asset. Data are never neutral, and therefore, data are never unbiased.

From the perspective of the worker, these platforms present an opportunity to survive the economic crisis in Venezuela. For Juan, this platform allowed him to work in “the comfort of my home, working at my own pace, and being next to my family.” Like many others, he started working for the platform because a close family member recommended it. Then he had to learn. Task instructions are usually written in English, which he doesn’t understand, so he has to rely on services like Google Translate to read the documents. Then he needed to practice the different tasks, ranging from the segmentation of images to their categorization. These tasks can take anywhere from a single click to several hours, depending on their complexity. When he started, he was making less than ten dollars per week, but with practice and experience, he increased his quality and speed (both measured by the platform) to the point where he was earning forty dollars per week, and once even seventy. Then he was banned, and the platform refused to pay him two weeks of wages. He never knew why, and he couldn’t ask because there was no recourse for him.

Like Juan, many other workers constantly face the threat of being banned or fired. Platforms continually remind workers of this risk in their instructions and interfaces, and often subject them to accuracy exercises with high stakes: if they fail, they get expelled from the task. In the case of Juan’s platform, there are open channels between managers and workers through Discord, but these are heavily moderated, and voicing any concern could get the worker expelled. This happened to Roberto, a Black worker whose identity could not be verified by the platform’s facial recognition system and who was removed from the Discord server after asking for help. “I was astonished. I didn’t do anything; I was expelled for asking a question! […] Moderators are really harsh, the easiest thing for them is to expel you instead of answering your questions.”

In this heavily designed AI labor system, workers are considered less worthy than the robots they help train: they are cheap, unreliable, and disposable. From the platform interfaces to the instructions for how to use them, everything is made to constrain workers’ judgment and reduce their labor process to a single click; nothing else seems necessary from above. These workers, however, are the silenced voices of humanity in AI creation. For years, scholars and policymakers have debated what ethical principles should govern AI without asking those who will be most affected by its deployment and development. We constantly ask how to make AI “align” with human values and understand the plurality of perspectives on this planet. These questions are essential, but we cannot find any answers if we continue to look from above and design systems that disregard the rights and voices of outsourced workers. Recognizing the work of annotators, respecting their fundamental rights as workers, and incorporating their unique regard for the systems they are helping to construct are overlooked but fundamental steps in ensuring that artificial intelligence benefits humanity.

For example, platforms rarely inform workers of the uses of the data they annotate. Some infer that it is destined to train AI, but they rarely receive any details beyond the annotation instructions. Some workers, however, do inquire, especially when the tasks are perceived as “creepy,” “worrisome,” or unethical. One of the most frequently cited tasks is the annotation of objects in house interiors. All the images seemed to be taken from the floor and recorded in different rooms. “I think these images were taken without the consent of the owners,” commented Roberto. “It was very strange to me and on many occasions, there were naked people in the images […] you would see people changing their clothes or naked in their bathrooms […] I questioned what I was doing, but I never asked because, as I told you, you can risk being expelled for anything.” Workers also discussed this task on a platform forum before being silenced by a moderator:

Worker A: When I wonder where they get these pictures from I freak out

Worker B: Do you know the purpose of [these] tasks? I myself don’t know why I’m doing this but maybe it’d be nice to know

Worker C: To teach the robot. AI learns from humans

Worker A: These people gave their consent to have pictures of their household taken and examined by internet strangers? because…the picture of the naked woman…

Worker C: If robots ever dominate the world… that’s on us

Worker A: I know haha I’m legitimately worried […] pretty creepy the things we do for pennies…

Worker D: I think they are aware that the robot can take pictures but I don’t think they know that there are humans looking at them. It’s probably in the terms and conditions no one ever reads

Moderator: Ok guys this is strict. You should not make this public. This is a project, ok. You can read [the platform] rules. Thank you.

This example shows that while workers currently do not have a say in how artificial intelligence is developed, they have unique perspectives on their work. Platforms know this and not only threaten to expel workers but also bind them to secrecy and complacency. In an introductory post for workers on a platform serving a major tech company, one of the frequently asked questions was “I got a task asking for something illegal. What do I do?” The post answered:

Drop your morals at the door. This is the interwebs. You will see tasks with porn fetishes, torrent downloading, file sharing, etc. We are not the internet police and annotate these just as we would any other task. If you are truly uncomfortable with the content of a task, you can release it.

These testimonies from crowdsourcing show that while the AI ethics community has adopted principles of fairness, transparency, and responsibility, mainly from the Global North, it has failed to address the intricate and exploitative labor and data supply chains that suppress marginalized voices. We need ethics from below: we must incorporate the voices of marginalized populations, including outsourced workers, to ensure that these systems do not benefit some groups at the expense of many others, and to transcend the limits of exclusively technical solutions.

In the case of workers, this is not possible without empowering them, recognizing their labor’s value, and guaranteeing their rights. There cannot be “fairness” in AI without ensuring that fair work principles are respected. “Transparency” cannot be achieved when hundreds of workers are excluded and silenced. AI cannot be “responsible” without caring for the livelihoods of the people in developing countries who make the technology possible. Protecting workers and their communities, their rights, and their perspectives on the technologies that they help create remains an essential step toward justice and co-liberation in artificial intelligence development.