Google’s Approval Of $135 Million Payout To Execs Accused Of Sexual Misconduct Sparks Fresh Employee Backlash

Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the “GooglePayOutsForAll” social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she said. “Where else could those resources go?”

Forbes

Mar 12, 2019

Business Leaders Set the A.I. Agenda

It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect.

The New York Times

Mar 3, 2019

Read This. Then Put Away Your Phone.

“People are recognizing that there are issues, and they want to change it,” Ms. Whittaker said. “But has the objective function of these corporations changed? They’re still major corporations at a time of neoliberal capitalism that are optimizing their products for shareholder value.”

The New York Times

Mar 1, 2019

Is Ethical A.I. Even Possible?

“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence.

The New York Times

Mar 1, 2019

A Crucial Step for Averting AI Disasters

The findings have spurred calls for closer scrutiny. Microsoft recently called on governments to regulate facial-recognition technology and to require auditing of systems for accuracy and bias. The AI Now Institute, a research group at New York University, is studying ways to reduce bias in AI systems.

The New York Times

Feb 13, 2019

AI’s accountability gap

The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.

Axios

Jan 10, 2019

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads.

The Intercept

Dec 6, 2018

Videos from the AI Now 2018 Symposium

Ethics, Organizing and Accountability

AI Now Institute

Oct 30, 2018

2018 Symposium

The AI Now 2018 Symposium addressed the intersection of AI, ethics, organizing, and accountability, examining the landmark events of the past year that have brought these topics squarely into focus. What can we learn from them and where is there more work to be done?

AI Now Institute

Oct 16, 2018

Could a bold move by John Hancock upend the insurance industry?

But some are wary of the burden companies take on with such massive quantities of consumer information. Kate Crawford, founder of the AI Now Institute, tweeted about the pitfalls of the Vitality program after John Hancock announced its widespread implementation: false data, consumers trying to game the system, “intimate” surveillance and data breaches.

The Washington Post

Oct 4, 2018

AI Now Law and Policy Reading List

AI Now Institute

Oct 1, 2018

AI Now 2018 Symposium

This year’s program will address the intersection of AI, ethics, organizing, and accountability – with speakers including Philip Alston, Sherrilyn Ifill, Lucy Suchman, Virginia Eubanks, Kevin De Liban, Vincent Southerland, Timnit Gebru, Nicole Ozer, and Marisa Franco.

AI Now Institute

Sep 6, 2018

California just replaced cash bail with algorithms

Rashida Richardson, policy director for AI Now, a nonprofit think tank dedicated to studying the societal impact of AI, tells Quartz that she’s skeptical that this system will be less biased than its predecessors.

Quartz

Sep 4, 2018

ACLU Podcast: How to fight an algorithm

What does all this mean for our civil liberties? And how can the public exercise oversight of a secret algorithm? AI Now Co-founder Meredith Whittaker discusses this brave new world — and the ways we can keep it in check.

ACLU

Aug 2, 2018

Facial recognition gives police a powerful new tracking tool. It’s also raising alarms.

“There needs to be greater transparency around the use of these technologies,” said Rashida Richardson, director of policy research at the AI Now Institute at New York University. “And a more open, public conversation about what types of use cases we are comfortable with — and what types of use cases should just not be available.”

NBC News

Jul 30, 2018

You and AI – Machine learning, bias and implications for inequality

Cofounder Kate Crawford delivers a keynote address at the Royal Society in London as part of the "You and AI" series.

AI Now Institute

Jul 17, 2018

The New Age of Innovation: Government’s Role in Artificial Intelligence

Policy director Rashida Richardson discusses regulation of AI and its use in government on a panel alongside two members of Congress and the head of the IT Industry Council.

AI Now Institute

Jul 11, 2018

Litigating Algorithms Workshop

AI Now partnered with NYU Law’s Center on Race, Inequality and the Law and the Electronic Frontier Foundation to host a first-of-its-kind workshop that examined current United States courtroom litigation in which the government’s use of algorithms was central to the rights and liberties at issue. Learn more by reading a summary of the workshop along with a report of our key takeaways.

AI Now Institute

Jun 22, 2018
