Everyone should decide how their digital data are used — not just tech companies

Smartphones, sensors and consumer habits reveal much about society. Too few people have a say in how these data are created and used.

Nature

Jul 1, 2021

The False Comfort of Human Oversight as an Antidote to A.I. Harm

Often expressed as the “human-in-the-loop” solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the “loop” of A.I. seems reassuring, this approach is instead “loopy” in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems.

Slate

Jun 15, 2021

Tech Companies are Training AI to Read Your Lips

“This is about actively creating a technology that can be put to harmful uses rather than identifying and mitigating vulnerabilities in existing technology,” Sarah Myers West, a researcher for the AI Now Institute, told Motherboard.

Vice

Jun 14, 2021

What Really Happened When Google Ousted Timnit Gebru

Meredith Whittaker, faculty director of New York University’s AI Now Institute, predicts that there will be a clearer split between work done at institutions like her own and work done inside tech companies. “What Google just said to anyone who wants to do this critical research is, ‘We’re not going to tolerate it,’” she says.

Wired

Jun 8, 2021

Chicago’s predictive policing program told a man he would be involved with a shooting

Another problem — exacerbated when forecasting programs do not disclose their sources of data — is that of “dirty data” being mixed with more straightforward crime reports: a 2019 study out of New York University’s AI Now Institute identified jurisdictions where inaccurate or falsified records were directly fed into the data. Chicago was one of them.

The Verge

May 25, 2021

Google made AI language the centerpiece of I/O while ignoring its troubled past at the company

“Google just featured LaMDA, a new large language model, at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months prepping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her + research critiquing this approach.”

The Verge

May 18, 2021

Social app Parler is cracking down on hate speech — but only on iPhones

AI moderation is “decently good at identifying the most obviously harmful material. It’s not so great at capturing nuance, and that’s where human moderation becomes necessary,” said Sarah Myers West, a postdoctoral researcher with NYU’s AI Now Institute. Making content moderation decisions is “highly skilled labor,” West said.

The Seattle Times

May 17, 2021

Critics raise alarm over Big Tech’s most powerful tools

"You never see these companies picking ethics over revenue," says Meredith Whittaker, a former Google engineer who now heads the AI Now Institute at New York University. "These are companies that are governed by shareholder capitalism."

Financial Times

May 16, 2021

“What on earth are they doing?”: AI ethics experts react to Google doubling embattled ethics team

“It sounds like a very desperate kind of cover-your-ass move from a company that is really unsure about how to continue to do business as usual in the face of a growing outcry around the harms and implications of their technologies.”

Tech Brew

May 14, 2021

Europe throws down gauntlet on AI with new rulebook

These are signs that the U.S. is entering a new era of regulation, said Meredith Whittaker of the AI Now Institute at New York University. "It remains to be seen how they use that power, but we are seeing at least in that agency a turn to a much more offensive stance for Big Tech," said Whittaker.

Politico

Apr 21, 2021

In scramble to respond to Covid-19, hospitals turned to models with high risk of bias

"Hard decisions are being made at hospitals all the time, especially in this space, but I’m worried about algorithms being the idea of where the responsibility gets shifted," said Varoon Mathur, a technology fellow at NYU’s AI Now Institute, in a Zoom interview.

MedCity News

Apr 21, 2021

U.S. Lawmakers Pressure DOJ Over Funding of Predictive Policing Tools

The lawmakers pointed to a 2019 study from New York University School of Law and NYU’s AI Now Institute which discovered predictive policing systems being trained on what the researchers call "dirty data," or data derived from "corrupt, biased, and unlawful practices."...

Gizmodo

Apr 15, 2021

Keeping an Eye on Landlord Tech

The landlord tech industry, while alive and well prior to COVID-19, has ramped up in the past year to develop new ways to accumulate wealth at the expense of tenants.

Mar 25, 2021

Video – Whistleblower Aid & AI Now: How Tech Workers Can Blow the Whistle Workshop

Ifeoma Ozoma, Veena Dubal, and Meredith Whittaker from the AI Now Institute join John Tye from Whistleblower Aid for a tech-worker-focused webinar covering the basics of safe whistleblowing and your rights as a worker.

Mar 25, 2021

Smile for the camera: the dark side of China’s emotion-recognition tech

“If anything, research and investigative reporting over the last few years have shown that sensitive personal information is particularly dangerous when in the hands of state entities, especially given the wide ambit of their possible use by state actors.”

The Guardian

Mar 3, 2021

China’s Big Tech clampdown: Why some businesses stand to benefit

“I’m not surprised that the pendulum swung one way and there was all this room for experimentation and now it’s swinging back the other way and you actually can’t do these things,” said Shazeda Ahmed, a visiting researcher at New York University’s AI Now Institute who has extensively studied the relationships between China’s tech industry and the local, provincial and central government.

Al Jazeera

Jan 26, 2021

Twitter probes alleged racial bias in image cropping feature

"This is another in a long and weary litany of examples that show automated systems encoding racism, misogyny and histories of discrimination."

Reuters

Sep 22, 2020

Questions swirl about possible racial bias in Twitter

Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, told CNBC: “Algorithmic discrimination is a reflection of larger patterns of social inequality … it’s about much more than just bias on the part of engineers or even bias in datasets, and will require more than a shallow understanding or set of fixes.”

CNBC

Sep 21, 2020