AI Vulnerabilities: An Integral Look at Security for AI

This workshop convened practitioners from the more traditional software security world, developers of AI systems, researchers studying AI security and vulnerability, and researchers examining the social and ethical implications of AI. AI systems can fail or be exploited in a variety of ways, from outright adversarial attacks, to bugs, to simple misapplication. Given the proliferation of AI systems across sensitive social domains, it is important to understand not simply where and how such vulnerabilities occur throughout the “stack” (and our capacity to predict them), but what their implications might be within a given context of use. This workshop explored the link between technical research and practice examining security and vulnerability, and research on AI bias and fairness. Our hope was to expand the current discussion of security and bias to account for AI's technical capabilities and failings, how these might manifest as harm when exploited in practice, and what cross-sectoral efforts might be needed to contend with these risks.

AI Now Institute

Aug 23, 2019

Bernie Sanders’s call to ban facial recognition tech for policing, explained

Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.

Vox

Aug 21, 2019

Flawed Algorithms Are Grading Millions of Students’ Essays

“If the immediate feedback you’re giving to a student is going to be biased, is that useful feedback? Or is that feedback that’s also going to perpetuate discrimination against certain communities?” Sarah Myers West, a postdoctoral researcher at the AI Now Institute, told Motherboard.

Vice

Aug 20, 2019

The Phony Patriots of Silicon Valley

Meredith Whittaker, a co-founder of the AI Now Institute at New York University and a former Google employee, characterized the tech industry’s scaremongering about China as a tactical move meant to deflect criticism.

The New York Times

Aug 12, 2019

How Amazon will take over your house

On paper, Amazon is giving out cool stuff for free. But the company is also getting "extremely inexpensive access to record some of the most intimate parts of your life," says Meredith Whittaker, co-founder of the AI Now Institute.

Axios

Jul 31, 2019

The Racist History Behind Facial Recognition

Customers have used “affect recognition” for everything from measuring how people react to ads to helping children with autism develop social and emotional skills, but a report from the A.I. Now Institute argues that the technology is being “applied in unethical and irresponsible ways.”

The New York Times

Jul 10, 2019

This AI Software Is ‘Coaching’ Customer Service Workers. Soon It Could Be Bossing You Around, Too

Meredith Whittaker, a distinguished research scientist at New York University and co-director of the AI Now Institute, argues that Cogito could become another example of AI software that’s difficult for people to understand, but ends up having a massive impact on their lives regardless.

Time

Jul 8, 2019

Artificial Intelligence Makes Boosting ID Theft Protection Critical, House AI Task Force Chair Warns

“AI systems have evidenced a persistent pattern of gender- and race-based discrimination. I have yet to encounter an AI system that was biased against white men,” said Meredith Whittaker, co-founder of the AI Now Institute at New York University.

Forbes

Jun 27, 2019

Alphabet’s Annual Meeting Draws Protests on Multitude of Issues

Google engineer Irene Knapp spoke in favor of a proposal to tie executive compensation to the company’s progress on diversity and inclusion. Knapp cited research by the group AI Now showing that bias in artificial intelligence technology is related to the lack of diversity in the industry.

Bloomberg

Jun 19, 2019

Exposing the Bias Embedded in Tech

The other panel member, Meredith Whittaker, a founder and a director of the AI Now Institute at New York University, noted that voice recognition tools that rely on A.I. often don’t recognize higher-pitched voices.

The New York Times

Jun 17, 2019

Google walkout organizer leaves company after claims of retaliation

Whittaker said she was told that, in order to stay at the company, she would have to “abandon” her work on AI ethics and her role at AI Now Institute, a research center she cofounded at New York University.

ABC News

Jun 7, 2019

Advertisers want to mine your brain

"The people who are able to apply and market these technologies are large brands and corporations," says Meredith Whittaker, co-founder of the AI Now Institute. "This is not an equal-access set of technologies. They're used by some on others."

Axios

Jun 3, 2019

San Francisco banned facial recognition tech. Here’s why other cities should too.

Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a new report from the AI Now Institute explains.

Vox

May 16, 2019

Former Defense Secretary Ash Carter says AI should never have the “true autonomy” to kill

What do you do with a group of people — again, going back to regulation — who have never been regulated to start to put guardrails in place? How do you get them to stick to those or think of them as a good thing?

Vox

May 13, 2019

Cameras Everywhere: The Ethics Of Eyes In The Sky

In December, AI Now reported on a subclass of facial recognition that supposedly measures your affect, with claims that it can detect your true personality, your inner feelings, and even your mental health based on images or video of your face. AI Now warned against using these tools in hiring or for access to insurance or policing.

Forbes

May 8, 2019

Google workers protest ‘culture of retaliation’ with sit-in

Whittaker wrote that she was informed that her role at the company would be “changed dramatically,” and that she would have to abandon her outside work on artificial intelligence ethics at AI Now Institute, an organization she co-founded.

LA Times

May 1, 2019

Two women led a protest at Google. Is Google retaliating against them now?

The news that Ms. Whittaker’s role would be overhauled came right after Google disbanded its external ethics review board, and at her NYU institute and elsewhere, she has agitated for safeguards around nascent technologies such as facial recognition.

The Washington Post

Apr 27, 2019

Why Tech Billionaires Are Spending To Restrain Artificial Intelligence

A report from New York University’s AI Now Institute found that a predominantly white male coding workforce is causing bias in algorithms.

Forbes

Apr 26, 2019
