As A.I. Booms, Lawmakers Struggle to Understand the Technology

“The picture in Congress is bleak,” said Amba Kak, the executive director of the AI Now Institute, a nonprofit research center, who recently advised the F.T.C. “The stakes are high because these tools are used in very sensitive social domains like in hiring, housing and credit, and there is real evidence that over the years, A.I. tools have been flawed and biased.”

The New York Times

Mar 3, 2023

Sam Altman says ‘potentially scary’ AI is on the horizon. This is what keeps AI experts up at night

Sarah Myers West, Managing Director of the AI Now Institute, told Euronews Next that "in many senses, that’s already where we are," with AI systems already being used to exacerbate "longstanding patterns of inequality".

Euronews

Feb 28, 2023

The Mimic: How will ChatGPT change everyday life? Sarah Myers West on artificial intelligence and human resilience.

West expects this latest version to affect certain industries significantly—but, she says, as uncanny as ChatGPT’s emulation of human writing is, it’s still capable of only a very small subset of functions previously requiring human intelligence. This isn’t to say it represents no meaningful problems, but to West, its future—and the future of artificial intelligence altogether—will be shaped less by the technology itself and more by what humanity does to determine its use.

The Signal

Jan 17, 2023

How AI chatbots are changing how we write and who we trust

What ChatGPT is, and how it works. Will it change how much you can trust what you read, or hear?

NPR

Jan 10, 2023

Mental health service criticised for experiment with AI chatbot

The experiment “raises significant ethical and moral concerns”, says Sarah Myers West at the AI Now Institute, a research center in New York City.

New Scientist

Jan 10, 2023

New AI technology ChatGPT raising questions about human creativity

"It in no way is reflecting the depths of human understanding or human intelligence. What it's really good at doing is mimicking its form."

NBC Nightly News

Dec 22, 2022

FTC Chair Khan Brings on AI Policy Advice From NYU Researchers

Khan announced Friday that NYU’s Meredith Whittaker, Amba Kak, and Sarah Myers West are joining an AI strategy group within the FTC’s Office of Policy Planning.

Bloomberg Law

Nov 19, 2021

Prison Tech Comes Home

Whatever the marketing promises, ultimately landlords’, bosses’, and schools’ intrusion of surveillance technologies into the home extends the carceral state into domestic spaces. In doing so, it also reveals the mutability of surveillance and assessment technologies, and the way the same systems can play many roles, while ultimately serving the powerful.

Public Books

Aug 18, 2021

Everyone should decide how their digital data are used — not just tech companies

Smartphones, sensors and consumer habits reveal much about society. Too few people have a say in how these data are created and used.

Nature

Jul 1, 2021

The False Comfort of Human Oversight as an Antidote to A.I. Harm

Often expressed as the “human-in-the-loop” solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the “loop” of A.I. seems reassuring, this approach is instead “loopy” in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems.

Slate

Jun 15, 2021

Tech Companies are Training AI to Read Your Lips

“This is about actively creating a technology that can be put to harmful uses rather than identifying and mitigating vulnerabilities in existing technology,” Sarah Myers West, a researcher for the AI Now Institute, told Motherboard.

Vice

Jun 14, 2021

What Really Happened When Google Ousted Timnit Gebru

Meredith Whittaker, faculty director of New York University’s AI Now Institute, predicts that there will be a clearer split between work done at institutions like her own and work done inside tech companies. “What Google just said to anyone who wants to do this critical research is, ‘We’re not going to tolerate it,’” she says.

Wired

Jun 8, 2021

Chicago’s predictive policing program told a man he would be involved with a shooting.

Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago’s one of them.

The Verge

May 25, 2021

Google made AI language the centerpiece of I/O while ignoring its troubled past at the company

“Google just featured LaMDA a new large language model at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months preping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her+ research critiquing this approach.”

The Verge

May 18, 2021

Social app Parler is cracking down on hate speech — but only on iPhones

AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” said Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute. Making content moderation decisions is “highly skilled labor,” West said.

The Seattle Times

May 17, 2021

Critics raise alarm over Big Tech’s most powerful tools

"You never see these companies picking ethics over revenue," says Meredith Whittaker, a former Google engineer who now heads the AI Now Institute at New York University. "These are companies that are governed by shareholder capitalism."

Financial Times

May 16, 2021

“What on earth are they doing?”: AI ethics experts react to Google doubling embattled ethics team

“It sounds like a very desperate kind of cover-your-ass move from a company that is really unsure about how to continue to do business as usual in the face of a growing outcry around the harms and implications of their technologies.”

Tech Brew

May 14, 2021

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias

"Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted," said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview.

MedCity News

Apr 21, 2021
