Google ‘retaliating against harassment protest organisers’
Meredith Whittaker, who co-signed the email, said her role had also been “changed dramatically”.
BBC
Apr 23, 2019
Google Walkout Organizers Say They Faced Retaliation
The other email author, Whittaker, describes how she was told she would be forced “to abandon my work on AI ethics and the AI Now Institute which I cofounded.” The two women say they found a common theme in the stories they collected after the walkout: “People who stand up and report discrimination, abuse, and unethical conduct are punished, sidelined, and pushed out.”
Wired
Apr 23, 2019
Some AI just shouldn’t exist
One of the authors on the AI Now report, Sarah Myers West, said in a press call that such “algorithmic gaydar” systems should not be built, both because they’re based on pseudoscience and because they put LGBTQ people at risk. “The researchers say, ‘We’re just doing this because we want to show how scary these systems can be,’ but then they explain in explicit detail how you would create such a system,” she said.
Vox
Apr 19, 2019
Gender, Race, and Power in AI: A Playlist
AI Now Institute
Apr 17, 2019
The problem with AI? Study says it’s too white and male, calls for more women, minorities
Artificial intelligence technologies are developed mostly at major tech companies such as Facebook, Google, Amazon and Microsoft, and in a small number of university labs, all of which tilt white, affluent and male and, in many cases, are only getting more so. Only by adding more women, people of color and other underrepresented groups can the field address bias and create more equitable systems, says Meredith Whittaker, a report author and co-founder of the AI Now Institute.
IMDiversity
Apr 16, 2019
The Google AI Ethics Board With Actual Power Is Still Around
“Ethical codes may deflect criticism by acknowledging that problems exist, without ceding any power to regulate or transform the way technology is developed and applied,” wrote the AI Now Institute, a research group at New York University, in a 2018 report.
Bloomberg
Apr 6, 2019
LAPD to scrap some crime data programs after criticism
Rashida Richardson, director of policy research at the AI Now Institute and a co-author of the report, said Smith’s findings mirrored suspicions that police target specific communities. “This shows a larger policing problem,” she said. “None of this is standardized. A lot of this system is one-sided.”
LA Times
Apr 5, 2019
Google’s brand-new AI ethics board is already falling apart
“The frameworks presently governing AI are not capable of ensuring accountability,” a review of AI ethical governance by the AI Now Institute concluded in November.
Vox
Apr 3, 2019
The Growing Marketplace For AI Ethics
Companies at all stages on the AI development spectrum want to know where public policy on intelligent systems is headed. The work being produced by New York–based research institute AI Now offers a road map.
Forbes
Mar 27, 2019
AI Regulation: It’s Time For Training Wheels
Most companies already designing and using artificial intelligence (AI) today recognize the need for guiding principles and a model for governance. Some are actively developing their own ethical AI policies, while others are collaborating with nonprofit organizations—such as the AI Now Institute or the Partnership on AI—grappling with the same issues.
Forbes
Mar 27, 2019
Data Quality: The Risks Of Dirty Data And AI
The AI Now Institute recently published an assertive list of some of the lowlights in AI ethics over the past year, along with a report highlighting the growing ethical risks in surveillance.
Forbes
Mar 27, 2019
4 Industries That Feel The Urgency Of AI Ethics
According to Meredith Whittaker, co-founder and co-director of the AI Now Institute at NYU, this is only the tip of the ethical iceberg. Accountability and liability are open and pressing issues that society must address as autonomous vehicles take over roadways.
Forbes
Mar 27, 2019
Our Software Is Biased Like We Are. Can New Laws Change That?
An increasingly common algorithm predicts whether parents will harm their children, basing the decision on whatever data is at hand. If a parent is low income and has used government mental-health services, that parent’s risk score goes up. But for another parent who can afford private health insurance, the data is simply unavailable. This creates an inherent (if unintended) bias against low-income parents, says Rashida Richardson, director of policy research at the nonprofit AI Now Institute, which provides feedback and relevant research to governments working on algorithmic transparency.
The Wall Street Journal
Mar 23, 2019
Google’s Approval Of $135 Million Payout To Execs Accused Of Sexual Misconduct Sparks Fresh Employee Backlash
Meredith Whittaker, a Google employee organizer and cofounder of the AI Now Institute, told Forbes that the “GooglePayOutsForAll” social media effort is meant to highlight the trade-off Google made in prioritizing payouts to Singhal and Rubin. “Imagine a world where we’re not paying sexual predators over $100 million dollars,” she said. “Where else could those resources go?”
Forbes
Mar 12, 2019
Business Leaders Set the A.I. Agenda
It’s hard to trust what you can’t see or validate. A.I. technologies aren’t visible to the vast majority of us, hidden behind corporate secrecy and integrated into back-end processes, obscured from the people they most affect.
The New York Times
Mar 3, 2019
Read This. Then Put Away Your Phone.
“People are recognizing that there are issues, and they want to change it,” Ms. Whittaker said. “But has the objective function of these corporations changed? They’re still major corporations at a time of neoliberal capitalism that are optimizing their products for shareholder value.”
The New York Times
Mar 1, 2019
Is Ethical A.I. Even Possible?
“People are recognizing there are issues, and they are recognizing they want to change them,” said Meredith Whittaker, a Google employee and the co-founder of the AI Now Institute, a research institute that examines the social implications of artificial intelligence.
The New York Times
Mar 1, 2019
A Crucial Step for Averting AI Disasters
The findings have spurred calls for closer scrutiny. Microsoft recently called on governments to regulate facial-recognition technology and to require auditing of systems for accuracy and bias. The AI Now Institute, a research group at New York University, is studying ways to reduce bias in AI systems.
The New York Times
Feb 13, 2019