Related Publications [7]

A New AI Lexicon: Gender

Dec 15, 2021

A New AI Lexicon: ‘Caste’

Nov 11, 2021

A New AI Lexicon: (In)Justice

Sep 16, 2021

A New AI Lexicon: Modernity + Coloniality

Jul 28, 2021

Suspect Development Systems: Databasing Marginality & Enforcing Discipline

Jul 7, 2021

Redistribution and Rekognition: A Feminist Critique of Algorithmic Fairness

Nov 7, 2020

Whitewashing tech: Why the erasures of the past matter today

Oct 1, 2020

Related Press [8]

How AI reduces the world to stereotypes

“Essentially what this is doing is flattening descriptions of, say, ‘an Indian person’ or ‘a Nigerian house’ into particular stereotypes which could be viewed in a negative light,” Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, told Rest of World. Even stereotypes that are not inherently negative, she said, are still stereotypes: They reflect a particular value judgment, and a winnowing of diversity.

Rest of World

Oct 10, 2023

America Already Has an AI Underclass

She and others argue that tech companies intentionally downplay the importance of human feedback. Such obfuscation “sockets away some of the most unseemly elements of these technologies,” such as hateful content and misinformation that humans have to identify, Myers West told me—not to mention the conditions the people work under.

The Atlantic

Jul 26, 2023

Why AI’s diversity crisis matters, and how to tackle it

In a 2019 report from New York University’s AI Now Institute, researchers noted that more than 80% of AI professors were men. Furthermore, Black individuals made up just 2.5% of Google employees and 4% of those working at Facebook and Microsoft.

Nature

May 19, 2023

Prison Tech Comes Home

Whatever the marketing promises, ultimately landlords’, bosses’, and schools’ intrusion of surveillance technologies into the home extends the carceral state into domestic spaces. In doing so, it also reveals the mutability of surveillance and assessment technologies, and the way the same systems can play many roles, while ultimately serving the powerful.

Public Books

Aug 18, 2021

Chicago’s predictive policing program told a man he would be involved with a shooting.

Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago was one of them.

The Verge

May 25, 2021

Social app Parler is cracking down on hate speech — but only on iPhones

AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” said Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute. Making content moderation decisions is “highly skilled labor,” West said.

The Seattle Times

May 17, 2021

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias

"Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted," said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview.

MedCity News

Apr 21, 2021

Twitter probes alleged racial bias in image cropping feature

"This is another in a long and weary litany of examples that show automated systems encoding racism, misogyny and histories of discrimination."

Reuters

Sep 22, 2020

Related Articles [3]