The problem with AI? Study says it’s too white and male, calls for more women, minorities

Artificial intelligence technologies are developed mostly in major tech companies such as Facebook, Google, Amazon and Microsoft, and in a small number of university labs, which all tilt white, affluent and male and, in many cases, are only getting more so. Only by adding more women, people of color and other underrepresented groups can artificial intelligence address the bias and create more equitable systems, says Meredith Whittaker, a report author and co-founder of the AI Now Institute.

IMDiversity

Apr 16, 2019

The Google AI Ethics Board With Actual Power Is Still Around

“Ethical codes may deflect criticism by acknowledging that problems exist, without ceding any power to regulate or transform the way technology is developed and applied,” wrote the AI Now Institute, a research group at New York University, in a 2018 report.

Bloomberg

Apr 6, 2019

LAPD to scrap some crime data programs after criticism

Rashida Richardson, director of policy research at the AI Now Institute and a co-author of the report, said Smith’s findings mirrored suspicions that police target specific communities. “This shows a larger policing problem,” she said. “None of this is standardized. A lot of this system is one-sided.”

LA Times

Apr 5, 2019

Google’s brand-new AI ethics board is already falling apart

“The frameworks presently governing AI are not capable of ensuring accountability,” a review of AI ethical governance by the AI Now Institute concluded in November.

Vox

Apr 3, 2019

AI Regulation: It’s Time For Training Wheels

Most companies already designing and using artificial intelligence (AI) today recognize the need for guiding principles and a model for governance. Some are actively developing their own ethical AI policies, while others are collaborating with nonprofit organizations—such as the AI Now Institute or the Partnership on AI—grappling with the same issues.

Forbes

Mar 27, 2019

The Growing Marketplace For AI Ethics

Companies at all stages on the AI development spectrum want to know where public policy on intelligent systems is headed. The work being produced by New York–based research institute AI Now offers a road map.

Forbes

Mar 27, 2019

Data Quality: The Risks Of Dirty Data And AI

The AI Now Institute recently published an assertive list of some of the lowlights in AI ethics over the past year, along with a report highlighting the growing ethical risks in surveillance.

Forbes

Mar 27, 2019

4 Industries That Feel The Urgency Of AI Ethics

According to Meredith Whittaker, co-founder and co-director of the AI Now Institute at NYU, this is only the tip of the ethical iceberg. Accountability and liability are open and pressing issues that society must address as autonomous vehicles take over roadways.

Forbes

Mar 27, 2019

Our Software Is Biased Like We Are. Can New Laws Change That?

An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.

The Wall Street Journal

Mar 23, 2019

Google’s Approval Of $135 Million Payout To Execs Accused Of Sexual Misconduct Sparks Fresh Employee Backlash

Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the “GooglePayOutsForAll” social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she said. “Where else could those resources go?”

Forbes

Mar 12, 2019

Business Leaders Set the A.I. Agenda

It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect.

The New York Times

Mar 3, 2019

Read This. Then Put Away Your Phone.

“People are recognizing that there are issues, and they want to change it,” Ms. Whittaker said. “But has the objective function of these corporations changed? They’re still major corporations at a time of neoliberal capitalism that are optimizing their products for shareholder value.”

The New York Times

Mar 1, 2019

Is Ethical A.I. Even Possible?

“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence.

The New York Times

Mar 1, 2019

A Crucial Step for Averting AI Disasters

The findings have spurred calls for closer scrutiny. Microsoft recently called on governments to regulate facial-recognition technology and to require auditing of systems for accuracy and bias. The AI Now Institute, a research group at New York University, is studying ways to reduce bias in AI systems.

The New York Times

Feb 13, 2019

AI’s accountability gap

The absence of rules of the road is in part because industry hands have cast tech regulation as troglodytic, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.

Axios

Jan 10, 2019

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads.

The Intercept

Dec 6, 2018

Could a bold move by John Hancock upend the insurance industry?

But some are wary of the burden companies take on with such massive quantities of consumer information. Kate Crawford, founder of the AI Now Institute, tweeted about the pitfalls of the Vitality program after John Hancock announced its widespread implementation: false data, consumers trying to game the system, “intimate” surveillance and data breaches.

The Washington Post

Oct 4, 2018

California just replaced cash bail with algorithms

Rashida Richardson, policy director for AI Now, a nonprofit think tank dedicated to studying the societal impact of AI, tells Quartz that she’s skeptical that this system will be less biased than its predecessors.

Quartz

Sep 4, 2018
