Catholic leaders call for ethical guidelines around AI
Tech companies need binding, detailed policies that hold them accountable in addressing the many ethical concerns surrounding AI, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Feb 28, 2020
Here’s What Happens When an Algorithm Determines Your Work Schedule
“Worse yet, corporate secrecy is a structural characteristic of the AI industry in general that undermines our ability to assess the full extent of how the technology is being used and hinders redress, oversight, and regulation.”
Vice
Feb 24, 2020
The Great Google Revolt
The discovery that their employer was working hand in hand with the Defense Department to bolster drone warfare “was a watershed moment,” said Meredith Whittaker, an A.I. researcher based in New York who led Google’s Open Research Group.
The New York Times
Feb 18, 2020
Bridging The Gender Gap In AI
The gap appears even more stark at the “FANG” companies—according to the AI Now Institute just 15% of AI research staff at Facebook and 10% at Google are women.
Forbes
Feb 17, 2020
US lawmakers concerned by accuracy of facial recognition
Ms Whittaker said corporate interests should not be allowed to "race ahead" and incorporate this technology into their systems without safeguards.
BBC
Jan 16, 2020
The Technology 202: Facial recognition gets another look on Capitol Hill today from skeptical lawmakers
Expect a divided group: Whittaker will call for policymakers and businesses to halt all use of facial recognition in sensitive social situations and political contexts, according to a press release.
The Washington Post
Jan 15, 2020
AI poses risks, but the White House says regulators shouldn’t “needlessly hamper” innovation
“It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government and the private sector around AI technology will offer greater insight to those of us working in the accountability space.”
Vox
Jan 8, 2020
New York City couldn’t pry open its own black box algorithms. So now what?
But Rashida Richardson, the policy research director of AI Now, says that position isn’t nearly enough, as the role has neither the mandate nor the power to reveal what automated systems are in use.
Vox
Dec 18, 2019
Workers Need to Unionize to Protect Themselves From Algorithmic Bosses
A new report on the social implications of artificial intelligence from NYU’s A.I. Now Institute argues that people who work under algorithms in 2019—from Uber drivers to Amazon warehouse workers to even some white collar office workers who may not know that they’re being surveilled—have increasing cause for concern and collective action.
Vice
Dec 18, 2019
Make Your Job Application Robot-Proof
“Often a job candidate doesn’t even know a system is in use,” and employers aren’t required to disclose it, says Sarah Myers West, a researcher at the AI Now Institute, a New York University research group.
Wall Street Journal
Dec 16, 2019
Emotion-detecting tech should be restricted by law – AI Now
The AI Now Institute says the field is "built on markedly shaky foundations". Despite this, systems are on sale to help vet job seekers, test criminal suspects for signs of deception, and set insurance prices.
BBC
Dec 12, 2019
Researchers criticize AI software that predicts emotions
A prominent group of researchers alarmed by the harmful social effects of artificial intelligence called Thursday for a ban on automated analysis of facial expressions in hiring and other major decisions.
Reuters
Dec 12, 2019
Activists want Congress to ban facial recognition. So they scanned lawmakers’ faces.
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
Vox
Nov 15, 2019
In AI, Diversity Is A Business Imperative
So how do we address the need for diversity and prevent bias? New York University's AI Now Institute report suggests that in addition to hiring a more diverse group of candidates, companies must be more transparent about pay and discrimination and harassment reports, among other practices, to create an atmosphere that will welcome a more diverse group of people.
Forbes
Nov 14, 2019
The Apple Card algo issue: What you need to know about A.I. in everyday life
“These systems are being trained on data that’s reflective of our wider society,” West said. “Thus, AI is going to reflect and really amplify back past forms of inequality and discrimination.”
CNBC
Nov 14, 2019
AI at work: Machines are training human workers to be more compassionate
A study published by New York University's AI Now Institute in April shows how many AI systems favor white people and males.
Tech Xplore
Aug 23, 2019
Facial recognition tech is a problem. Here’s how the Democratic candidates plan to tackle it.
Bernie Sanders’s call to ban facial recognition tech for policing, explained
Even if all the technical issues were to be fixed and facial recognition tech completely de-biased, would that stop the software from harming our society when it’s deployed in the real world? Not necessarily, as a recent report from the AI Now Institute explains.
Vox
Aug 21, 2019
Flawed Algorithms Are Grading Millions of Students’ Essays
“If the immediate feedback you’re giving to a student is going to be biased, is that useful feedback? Or is that feedback that’s also going to perpetuate discrimination against certain communities?” Sarah Myers West, a postdoctoral researcher at the AI Now Institute, told Motherboard.
Vice
Aug 20, 2019