New AI technology ChatGPT raising questions about human creativity

"It in no way is reflecting the depths of human understanding or human intelligence. What it's really good at doing is mimicking its form."

NBC Nightly News

Dec 22, 2022

FTC Chair Khan Brings on AI Policy Advice From NYU Researchers

"Khan announced Friday that NYU’s Meredith Whittaker, Amba Kak, and Sarah Myers West are joining an AI strategy group within the FTC’s Office of Policy Planning."

Bloomberg Law

Nov 19, 2021

AI Now Team Members on Government Detail

AI Now Institute

Nov 19, 2021

Prison Tech Comes Home

Whatever the marketing promises, ultimately landlords’, bosses’, and schools’ intrusion of surveillance technologies into the home extends the carceral state into domestic spaces. In doing so, it also reveals the mutability of surveillance and assessment technologies, and the way the same systems can play many roles, while ultimately serving the powerful.

Public Books

Aug 18, 2021

Everyone should decide how their digital data are used — not just tech companies

Smartphones, sensors and consumer habits reveal much about society. Too few people have a say in how these data are created and used.

Nature

Jul 1, 2021

Launching ‘A New AI Lexicon: Responses and Challenges to the Critical AI Discourse’

Noopur Raval and Amba Kak

Jun 22, 2021

The False Comfort of Human Oversight as an Antidote to A.I. Harm

Often expressed as the “human-in-the-loop” solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the “loop” of A.I. seems reassuring, this approach is instead “loopy” in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems.

Slate

Jun 15, 2021

Tech Companies are Training AI to Read Your Lips

“This is about actively creating a technology that can be put to harmful uses rather than identifying and mitigating vulnerabilities in existing technology,” Sarah Myers West, a researcher for the AI Now Institute, told Motherboard.

Vice

Jun 14, 2021

What Really Happened When Google Ousted Timnit Gebru

Meredith Whittaker, faculty director of New York University’s AI Now Institute, predicts that there will be a clearer split between work done at institutions like her own and work done inside tech companies. “What Google just said to anyone who wants to do this critical research is, ‘We’re not going to tolerate it,’” she says.

Wired

Jun 8, 2021

Chicago’s predictive policing program told a man he would be involved with a shooting.

Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago was one of them.

The Verge

May 25, 2021

Google made AI language the centerpiece of I/O while ignoring its troubled past at the company

“Google just featured LaMDA, a new large language model, at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months prepping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her + research critiquing this approach.”

The Verge

May 18, 2021

Social app Parler is cracking down on hate speech — but only on iPhones

AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” said Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute. Making content moderation decisions is “highly skilled labor,” West said.

The Seattle Times

May 17, 2021

Critics raise alarm over Big Tech’s most powerful tools

"You never see these companies picking ethics over revenue," says Meredith Whittaker, a former Google engineer who now heads the AI Now Institute at New York University. "These are companies that are governed by shareholder capitalism."

Financial Times

May 16, 2021

“What on earth are they doing?”: AI ethics experts react to Google doubling embattled ethics team

“It sounds like a very desperate kind of cover-your-ass move from a company that is really unsure about how to continue to do business as usual in the face of a growing outcry around the harms and implications of their technologies.”

Tech Brew

May 14, 2021

Europe throws down gauntlet on AI with new rulebook

These are signs that the U.S. is entering a new era of regulation, said Meredith Whittaker of the AI Now Institute at New York University. "It remains to be seen how they use that power, but we are seeing at least in that agency a turn to a much more offensive stance for Big Tech," said Whittaker.

Politico

Apr 21, 2021

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias

"Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted," said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview.

MedCity News

Apr 21, 2021

U.S. Lawmakers Pressure DOJ Over Funding of Predictive Policing Tools

The lawmakers pointed to a 2019 study from New York University School of Law and NYU’s AI Now Institute which discovered predictive policing systems being trained on what the researchers call "dirty data," or data derived from "corrupt, biased, and unlawful practices."...

Gizmodo

Apr 15, 2021

Ethical, Legal & Social Implications of machine learning in genomics | National Human Genome Research Institute | Varoon Mathur

AI Now Institute

Apr 13, 2021
