Illustration by Somnath Bhatt

How do we want to be recognized?

A guest post by Nina Dewi Toft Djanegara. Nina is a PhD candidate in the Department of Anthropology at Stanford University. Her research examines how technology — such as facial recognition, biometric scanners, satellites, and drones — is applied in border management and law enforcement. Twitter: @toftdjanegara

This essay is part of our ongoing “AI Lexicon” project, a call for contributions to generate alternate narratives, positionalities, and understandings to the better known and widely circulated ways of talking about AI.

This week, we co-published with Fast Company. You can find the essay on their site here.


In the last five years, facial recognition has become a battleground for the future of Artificial Intelligence (AI). This controversial technology encapsulates public fears about inescapable surveillance, algorithmic bias, and dystopian AI. Cities across the United States have banned the use of facial recognition by government agencies and prominent companies have announced moratoria on the technology’s development.

But what does it mean to be recognized? Numerous authors have sketched out the social, political and ethical implications of facial recognition technology.¹ ² ³ These important critiques highlight the consequences of false positive identifications, which have already resulted in the wrongful arrests of Black men,⁴ as well as facial recognition’s effects on privacy, civil liberties, and freedom of assembly. In this essay, however, I examine how the technology of facial recognition is intertwined with other types of social and political recognition, and I highlight how technologists’ efforts to “diversify” and “debias” facial recognition may actually exacerbate the discriminatory effects they seek to resolve. Within the field of computer vision, the problem of biased facial recognition has been interpreted as a call to build more inclusive datasets and models. I argue that, instead, researchers should critically interrogate what can’t or shouldn’t be recognized by computer vision.

Recognition is one of the oldest problems in computer vision. For researchers in this field, recognition is a matter of detection and classification. Or, as one textbook states, “The object recognition problem can be defined as a labeling problem based on models of known objects.”⁵

When recognition is applied to people, it becomes a question of using visual attributes to determine what kind of person is depicted in an image. This is the basis for facial recognition (FR), which attempts to link a person to a previously-captured image of their face, and facial analysis (FA), which claims to recognize attributes like race,⁶ gender,⁷ ⁸ sexuality,⁹ or emotions¹⁰ based on an image of a face.
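
To make that distinction concrete, the sketch below contrasts the two tasks. It is a minimal illustration in Python, not any vendor’s pipeline: embed_face is a hypothetical placeholder for a trained model, and the gallery, threshold, and attribute labels are assumptions made purely for the example.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Placeholder for a trained face-embedding model: here we simply
    # downsample the pixel values into a fixed-length feature vector.
    return np.resize(image.astype(float).ravel(), 128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def recognize(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Facial recognition (FR): link a probe image to a previously captured
    image by comparing feature vectors against an enrolled gallery."""
    probe_vec = embed_face(probe)
    scores = {name: cosine_similarity(probe_vec, vec) for name, vec in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None

def analyze(image: np.ndarray) -> dict:
    """Facial analysis (FA): claim attribute labels directly from one image.
    A real FA model would emit predictions here; this placeholder output only
    illustrates the shape of the claim such systems make."""
    return {"age_group": None, "gender": None, "emotion": None}

# Hypothetical usage, with random arrays standing in for photographs.
rng = np.random.default_rng(0)
gallery = {"person_a": embed_face(rng.random((64, 64))),
           "person_b": embed_face(rng.random((64, 64)))}
print(recognize(rng.random((64, 64)), gallery))
print(analyze(rng.random((64, 64))))
```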

Recent advances in AI and machine learning (ML) research (e.g., convolutional neural networks and deep learning) have produced enormous gains in the technical performance of facial recognition and facial analysis models.¹¹ These performance improvements have ushered in a new era of facial recognition and its widespread application in commercial and institutional domains. Nevertheless, algorithmic audits have revealed concerning performance disparities when facial recognition and analysis tasks are conducted on different demographic groups,¹² ¹³ with lower accuracy for darker-skinned women in particular.
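
The audits cited above report performance broken out by demographic subgroup rather than as a single aggregate number. A minimal, generic sketch of that kind of disaggregated evaluation (the toy labels and group names here are invented for illustration) might look like this:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Report accuracy per demographic subgroup instead of one overall score,
    so that performance disparities are not averaged away."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical example: 1/0 labels for "match"/"non-match" on a toy benchmark.
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.5} -- the aggregate accuracy of 0.75 would hide this gap.
```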

In response to these audits, the Fairness, Accountability, and Transparency (FAT) in machine learning community has moved to build bigger and more diverse datasets for model training and evaluation,¹⁴ ¹⁵ ¹⁶ ¹⁷ some of which include synthetic faces.¹⁸ ¹⁹ These efforts include scraping images off the Internet without the knowledge of the people depicted in those photos, leading some to point out that these projects violate ethical norms about privacy and consent.²⁰ ²¹ Other attempts to create diverse datasets have been even more troubling, for instance when Google contractors solicited facial scans from Black homeless people in Los Angeles and Atlanta who were compensated with 5-dollar Starbucks gift cards.²² Such efforts remind us that inclusion does not always entail fairness. They also raise questions about whether researchers should even be collecting more data about people who are already heavily surveilled²³ in order to build tools that can be used to further surveil them.²⁴ This relates to what Keeanga-Yamahtta Taylor has termed predatory inclusion, which describes how so-called inclusive programs create more harms than benefits for marginalized people, especially Black communities.²⁵ ²⁶

Other work in the Fairness, Accountability, and Transparency community has attempted to resolve the issue of biased facial recognition and unbalanced datasets by devising new data sampling strategies that either over-sample minority demographics or under-sample the majority.²⁷ ²⁸ Yet another approach has been the creation of “bias-aware” systems that learn attributes like race and gender in order to improve model performance.²⁹ ³⁰ ³¹ These systems start by extracting demographic characteristics from an image, which are then used as explicit cues for the facial recognition task. Put simply: they first try to detect a person’s race and/or gender and then use that information to make facial recognition work better. However, none of these methods question the underlying premise that social categories like race, gender, and sexuality are fixed attributes that can be recognized based solely on visual cues — or why automated recognition of these attributes is necessary in our society at all.
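
As a rough illustration of the rebalancing strategy described above (a generic sketch, not the method of any cited paper), over-sampling duplicates examples from under-represented groups until every group contributes equally many training examples. Note that the sketch presupposes exactly the kind of demographic label (group_of) whose availability and meaning this essay goes on to question.

```python
import random

def oversample_to_balance(samples, group_of, seed=0):
    """Return a training set in which every demographic group appears equally
    often, by duplicating randomly chosen members of smaller groups.

    samples:  list of training examples (e.g., image records)
    group_of: function mapping a sample to its assumed demographic label
    """
    rng = random.Random(seed)
    by_group = {}
    for sample in samples:
        by_group.setdefault(group_of(sample), []).append(sample)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        # Duplicate random members until this group reaches the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# Hypothetical usage with toy records tagged by a contested "group" field.
toy = [{"img": f"img_{i}.jpg", "group": "A"} for i in range(6)] + \
      [{"img": f"img_{i}.jpg", "group": "B"} for i in range(6, 8)]
balanced = oversample_to_balance(toy, group_of=lambda s: s["group"])
print(len(balanced))  # 12: six from group A, six (with duplicates) from group B
```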

At the crux of this issue is the tenuous intersection between identity and appearance. For example, race is a social category that is linked but not equivalent to phenotype.³² ³³ Because race is not an objective or natural descriptor, it is impossible to definitively recognize someone’s race based on their image, and any attempt to do so can veer quickly into the realm of scientific racism.³⁴ Similarly, while the performance of gender often involves deliberate aesthetic self-presentation, gender itself cannot be discerned from appearance alone. Visual cues can suggest membership in a social group, but they do not define it.

In contrast, within the social sciences and in many activist spaces, recognition is understood as a social process born out of shared histories and identities. As philosopher Georg Hegel describes it, recognition is mutual and inter-subjective; we develop and affirm our sense of identity through being recognized by other people. Moreover, social recognition is ongoing, because people are not fixed, nor are our relationships to each other.

Meanwhile, within the field of computer vision, recognition is always a one-sided visual assessment. Additionally, computer vision’s method of classification often imposes categories that are mutually exclusive — you can only belong to one — whereas from a social perspective, we regard identities as multiple and intersecting, with certain traits like gender or sexuality existing on some kind of spectrum. When facial analysis systems assign a label that contradicts a person’s self-identity, for instance when classifying a person as the wrong gender, this can be an injurious form of misrecognition.³⁵ ³⁶

In comparison, social recognition is like a nod of assurance that says I see you as you see yourself. Or, as Stuart Hall puts it, shared identity is built off the “recognition of some common origin or shared characteristics with another person or group, or with an ideal, and with the natural closure of solidarity and allegiance established on this foundation.”³⁷ Furthermore, shared identities are more than just descriptors of some pre-existing condition; they can also be cultivated, mobilized, and leveraged as powerful tools for political organizing.³⁸ When this happens, mutual recognition can form the basis for entire movements, where communities come together in solidarity to demand political recognition from the state and powerful institutions.

This kind of political solidarity was put into practice in the recent activist efforts to ban the use of facial recognition. In New Orleans, for example, the city’s facial recognition ban was achieved by a grassroots coalition of Black youth, sex workers, musicians, and Jewish Voice for Peace.³⁹ Elsewhere, campaigns have featured diverse alliances of immigrant rights and Latinx advocacy organizations, Black and Muslim activists, as well as privacy and anti-surveillance groups.⁴⁰ ⁴¹ After a wave of successful bans at the municipal level, these community activists are now pushing for legislation at the state and national levels and fighting against the use of facial recognition by federal agencies and private companies. I myself was inspired to reflect on the different meanings of identity and recognition when Noor, an LA-based anti-surveillance activist, told me, “That’s how we defeat surveillance…instead of watching each other, seeing each other.” Noor’s words helped me to understand how seeing is about mutual understanding and validation, while watching is about objectification and alienation.

Ultimately, any computer vision project is based on the premise that a person’s outsides can tell us something definitive about their insides. These are systems based solely on appearance, rather than identity, solidarity or belonging. And while facial recognition may seem futuristic, the technology is fundamentally backwards-looking, since its functioning depends on images of past selves and outmoded ways of classifying people. Looking forward, instead of asking how to make facial recognition better, perhaps the question should be: how do we want to be recognized?


References

[1] Selinger, Evan, and Woodrow Hartzog. 2020. “The Inconsentability of Facial Surveillance.” SSRN Scholarly Paper ID 3557508. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3557508.

[2] Stark, Luke. 2019. “Facial Recognition Is the Plutonium of AI.” XRDS: Crossroads, The ACM Magazine for Students 25 (3): 50–55.

[3] Crawford, Kate. 2019. “Halt the Use of Facial-Recognition Technology until It Is Regulated.” Nature 572 (7771): 565. https://doi.org/10.1038/d41586-019-02514-7.

[4] Hill, Kashmir. 2020. “Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match.” The New York Times, December 29, 2020, sec. Technology. https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html.

[5] Jain, Ramesh, Rangachar Kasturi, and Brian G. Schunck. 1995. Machine Vision. McGraw-Hill Series in Computer Science. New York: McGraw-Hill.

[6] Jung, Soon-gyo, Jisun An, Haewoon Kwak, Joni Salminen, and Bernard Jansen. 2018. “Assessing the Accuracy of Four Popular Face Recognition Tools for Inferring Gender, Age, and Race.” Proceedings of the International AAAI Conference on Web and Social Media 12 (1). https://ojs.aaai.org/index.php/ICWSM/article/view/15058.

[7] Akbulut, Yaman, Abdulkadir Şengür, and Sami Ekici. 2017. “Gender Recognition from Face Images with Deep Learning.” In 2017 International Artificial Intelligence and Data Processing Symposium (IDAP), 1–4. https://doi.org/10.1109/IDAP.2017.8090181.

[8] Azzopardi, George, Antonio Greco, Alessia Saggese, and Mario Vento. 2018. “Fusion of Domain-Specific and Trainable Features for Gender Recognition From Face Images.” IEEE Access 6: 24171–83. https://doi.org/10.1109/ACCESS.2018.2823378.

[9] Wang, Yilun, and Michal Kosinski. 2018. “Deep Neural Networks Are More Accurate than Humans at Detecting Sexual Orientation from Facial Images.” Journal of Personality and Social Psychology 114 (2): 246.

[10] Ko, Byoung Chul. 2018. “A Brief Review of Facial Emotion Recognition Based on Visual Information.” Sensors 18 (2): 401. https://doi.org/10.3390/s18020401.

[11] NIST. 2018. “NIST Evaluation Shows Advance in Face Recognition Software’s Capabilities.” Text. National Institute of Standards and Technology. November 30, 2018. https://www.nist.gov/news-events/news/2018/11/nist-evaluation-shows-advance-face-recognition-softwares-capabilities.

[12] Buolamwini, Joy, and Timnit Gebru. 2018. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” In Conference on Fairness, Accountability and Transparency, 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html.

[13] Grother, Patrick, Mei Ngan, and Kayee Hanaoka. 2019. “Face Recognition Vendor Test Part 3: Demographic Effects.” NIST IR 8280. Gaithersburg, MD: National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280.

[14] Kärkkäinen, Kimmo, and Jungseock Joo. 2019. “FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age.” ArXiv:1908.04913 [Cs], August. http://arxiv.org/abs/1908.04913.

[15] Merler, Michele, Nalini Ratha, Rogerio S. Feris, and John R. Smith. 2019. “Diversity in Faces.” ArXiv:1901.10436 [Cs], April. http://arxiv.org/abs/1901.10436.

[16] Robinson, Joseph P., Gennady Livitz, Yann Henon, Can Qin, Yun Fu, and Samson Timoner. 2020. “Face Recognition: Too Bias, or Not Too Bias?” ArXiv:2002.06483 [Cs], April. http://arxiv.org/abs/2002.06483.

[17] Wang, Mei, Weihong Deng, Jiani Hu, Xunqiang Tao, and Yaohai Huang. 2019. “Racial Faces In-the-Wild: Reducing Racial Bias by Information Maximization Adaptation Network.” ArXiv:1812.00194 [Cs], July. http://arxiv.org/abs/1812.00194.

[18] Kortylewski, Adam, Bernhard Egger, Andreas Schneider, Thomas Gerig, Andreas Morel-Forster, and Thomas Vetter. 2019. “Analyzing and Reducing the Damage of Dataset Bias to Face Recognition With Synthetic Data.” In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2261–68. https://doi.org/10.1109/CVPRW.2019.00279.

[19] Generated Media. 2021. “Image Datasets For Machine Learning.” Generated.Photos. 2021. https://generated.photos/datasets.

[20] Prabhu, Vinay Uday, and Abeba Birhane. 2020. “Large Image Datasets: A Pyrrhic Win for Computer Vision?” ArXiv:2006.16923 [Cs, Stat], July. http://arxiv.org/abs/2006.16923.

[21] Raji, Inioluwa Deborah, and Joy Buolamwini. 2019. “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products.” In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 429–35. Honolulu HI USA: ACM. https://doi.org/10.1145/3306618.3314244.

[22] Fussell, Sidney. 2019. “How an Attempt at Correcting Bias in Tech Goes Wrong.” The Atlantic. October 9, 2019. https://www.theatlantic.com/technology/archive/2019/10/google-allegedly-used-homeless-train-pixel-phone/599668/.

[23] Browne, Simone. 2015. Dark Matters: On the Surveillance of Blackness. Durham: Duke University Press.

[24] Samudzi, Zoé. 2019. “Bots Are Terrible at Recognizing Black Faces. Let’s Keep It That Way.” The Daily Beast, February 9, 2019. https://www.thedailybeast.com/bots-are-terrible-at-recognizing-black-faces-lets-keep-it-that-way.

[25] Taylor, Keeanga-Yamahtta. 2019. Race for Profit: How Banks and the Real Estate Industry Undermined Black Homeownership. Justice, Power, and Politics. Chapel Hill: The University of North Carolina Press.

[26] Denton, Emily, Alex Hanna, Razvan Amironesei, Andrew Smart, Hilary Nicole, and Morgan Klaus Scheuerman. 2020. “Bringing the People Back In: Contesting Benchmark Machine Learning Datasets.” ArXiv:2007.07399 [Cs], July. http://arxiv.org/abs/2007.07399.

[27] Huang, Chen, Yining Li, Chen Change Loy, and Xiaoou Tang. 2019. “Deep Imbalanced Learning for Face Recognition and Attribute Prediction.” ArXiv:1806.00194 [Cs], April. http://arxiv.org/abs/1806.00194.

[28] Yin, Xi, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, and Manmohan Chandraker. 2019. “Feature Transfer Learning for Deep Face Recognition with Under-Represented Data.” ArXiv:1803.09014 [Cs], August. http://arxiv.org/abs/1803.09014.

[29] Klare, B. F., M. J. Burge, J. C. Klontz, R. W. Vorder Bruegge, and A. K. Jain. 2012. “Face Recognition Performance: Role of Demographic Information.” IEEE Transactions on Information Forensics and Security 7 (6): 1789–1801. https://doi.org/10.1109/TIFS.2012.2214212.

[30] Mahalingam, Gayathri, and Chandra Kambhamettu. 2011. “Can Discriminative Cues Aid Face Recognition across Age?” In 2011 IEEE International Conference on Automatic Face Gesture Recognition (FG), 206–12. https://doi.org/10.1109/FG.2011.5771399.

[31] Ryu, Hee Jung, Hartwig Adam, and Margaret Mitchell. 2018. “InclusiveFaceNet: Improving Face Attribute Detection with Race and Gender Diversity.” ArXiv:1712.00193 [Cs], July. http://arxiv.org/abs/1712.00193.

[32] Omi, Michael, and Howard Winant. 2015. Racial Formation in the United States. Third edition. New York: Routledge/Taylor & Francis Group.

[33] Obasogie, Osagie K. 2013. Blinded by Sight: Seeing Race through the Eyes of the Blind. Stanford, California: Stanford Law Books, an imprint of Stanford University Press.

[34] Olson, Parmy. 2020. “The Quiet Growth of Race-Detection Software Sparks Concerns Over Bias.” Wall Street Journal, August 14, 2020, sec. Life. https://www.wsj.com/articles/the-quiet-growth-of-race-detection-software-sparks-concerns-over-bias-11597378154.

[35] Keyes, Os. 2018. “The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition.” Proceedings of the ACM on Human-Computer Interaction 2 (CSCW): 88:1–88:22. https://doi.org/10.1145/3274357.

[36] Scheuerman, Morgan Klaus, Jacob M. Paul, and Jed R. Brubaker. 2019. “How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis Services.” Proceedings of the ACM on Human-Computer Interaction 3 (CSCW): 144:1–144:33. https://doi.org/10.1145/3359246.

[37] Hall, Stuart. 2011. “Introduction: Who Needs ‘Identity’?” In Questions of Cultural Identity, 1–17. London: SAGE Publications Ltd. https://doi.org/10.4135/9781446221907.

[38] Appiah, Kwame Anthony. “Opinion | What We Can Learn From the Rise and Fall of ‘Political Blackness.’” The New York Times, October 7, 2020, sec. Opinion. https://www.nytimes.com/2020/10/07/opinion/political-blackness-race.html.

[39] Sinders, Caroline. “How Musicians and Sex Workers Beat Facial Recognition in New Orleans.” https://www.vice.com/en/article/xgznka/meet-the-musicians-and-strippers-who-beat-facial-recognition-in-new-orleans.

[40] Fight for the Future. “Ban Facial Recognition.” https://www.banfacialrecognition.com.

[41] Mijente. “Take Back Tech | #NoTechForICE.” https://notechforice.com/convening/