Here’s What Happens When an Algorithm Determines Your Work Schedule

“Worse yet, corporate secrecy is a structural characteristic of the AI industry in general that undermines our ability to assess the full extent of how the technology is being used and hinders redress, oversight, and regulation.”

Vice

Feb 24, 2020

We’re hiring a Critical Designer

The Critical Designer will scope, iterate, and develop meaningful visualizations based on AI Now’s Data Genesis research.

AI Now Institute

Feb 19, 2020

The Great Google Revolt

The discovery that their employer was working hand in hand with the Defense Department to bolster drone warfare “was a watershed moment,” said Meredith Whittaker, an A.I. researcher based in New York who led Google’s Open Research Group.

The New York Times

Feb 18, 2020

Imagining a post-GDPR order: Panel at CPDP 2020

Amba Kak spoke on a panel about the future of data protection law and algorithmic accountability at CPDP 2020.

AI Now Institute

Feb 18, 2020

Bridging The Gender Gap In AI

The gap appears even more stark at the “FANG” companies—according to the AI Now Institute just 15% of AI research staff at Facebook and 10% at Google are women.

Forbes

Feb 17, 2020

We’re Hiring Graduate Research Assistants

AI Now is seeking Graduate Research Assistants to support ongoing research at the Institute. These part-time positions are ideal for graduate students looking to gain exposure to the developing interdisciplinary field surrounding the social and political implications of artificial intelligence and machine learning technologies. Positions are available for spring and summer 2020 and the 2020–21 academic year.

AI Now Institute

Feb 11, 2020

Announcing AI Now’s Data Genesis Program: Studying and demystifying training data

We are thrilled to announce the official launch of the Data Genesis research program at the AI Now Institute at NYU.

AI Now Institute

Jan 30, 2020

US lawmakers concerned by accuracy of facial recognition

Ms Whittaker said corporate interest should not be allowed to "race ahead" and incorporate this technology into their systems without safeguards.

BBC

Jan 16, 2020

The Technology 202: Facial recognition gets another look on Capitol Hill today from skeptical lawmakers

Expect a divided group: Whittaker will call for policymakers and businesses to halt all use of facial recognition in sensitive social situations and political contexts, according to a press release.

The Washington Post

Jan 15, 2020

Facial Recognition Technology (Part III): Ensuring Commercial Transparency & Accuracy

Learn more about the steps Congress should take to mitigate the harmful effects of AI and facial recognition tech.

AI Now Institute

Jan 15, 2020

AI poses risks, but the White House says regulators shouldn’t “needlessly hamper” innovation

“It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government and the private sector around AI technology will offer greater insight to those of us working in the accountability space.”

Vox

Jan 8, 2020

New York City couldn’t pry open its own black box algorithms. So now what?

But Rashida Richardson, the policy research director of AI Now, says that position isn’t nearly enough, as the role has neither the mandate nor the power to reveal what automated systems are in use.

Vox

Dec 18, 2019

Workers Need to Unionize to Protect Themselves From Algorithmic Bosses

A new report on the social implications of artificial intelligence from NYU’s A.I. Now Institute argues that people who work under algorithms in 2019—from Uber drivers to Amazon warehouse workers to even some white collar office workers who may not know that they’re being surveilled—have increasing cause for concern and collective action.

Vice

Dec 18, 2019

Make Your Job Application Robot-Proof

“Often a job candidate doesn’t even know a system is in use,” and employers aren’t required to disclose it, says Sarah Myers West, a researcher at the AI Now Institute, a New York University research group.

Wall Street Journal

Dec 16, 2019

Researchers criticize AI software that predicts emotions

A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions.

Reuters

Dec 12, 2019

Emotion-detecting tech should be restricted by law – AI Now

The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices.

BBC

Dec 12, 2019

Community Forum on Algorithmic Bias

AI Now and several other organizations partnered with the NAACP Legal Defense and Educational Fund (LDF) to host a community forum on algorithmic bias in connection with the publication of Confronting Black Boxes: A Shadow Report on the New York City Automated Decision System Task Force.

AI Now Institute

Dec 7, 2019
