One of the authors on the AI Now report, Sarah Myers West, said in a press call that such “algorithmic gaydar” systems should not be built, both because they’re based on pseudoscience and because they put LGBTQ people at risk. “The researchers say, ‘We’re just doing this because we want to show how scary these systems can be,’ but then they explain in explicit detail how you would create such a system,” she said.
Co-author Kate Crawford listed other problematic examples, like attempts to predict “criminality” from facial features and to assess worker competence on the basis of “micro-expressions.” Studying physical appearance as a proxy for character is reminiscent of the dark history of “race science,” she said, in particular the debunked field of phrenology, which sought to derive character traits from skull shape and was invoked by white supremacists in 19th-century America.
“We see these systems replicating patterns of race and gender bias in ways that may deepen and actually justify injustice,” Crawford warned, noting that facial recognition services have been shown to ascribe more negative emotions (like anger) to black people than to white people because human bias creeps into the training data.
For all these reasons, there’s a growing recognition among scholars and advocates that some biased AI systems should not be “fixed,” but abandoned. As co-author Meredith Whittaker said, “We need to look beyond technical fixes for social problems. We need to ask: Who has power? Who is harmed? Who benefits? And ultimately, who gets to decide how these tools are built and which purposes they serve?”