Fake Data Could Help Solve Machine Learning’s Bias Problem—if We Let It
“That process of creating a synthetic data set, depending on what you’re extrapolating from and how you’re doing that, can actually exacerbate the biases,” says Deb Raji, a technology fellow at the AI Now Institute.
Slate
Sep 17, 2020
‘Encoding the same biases’: Artificial intelligence’s limitations in coronavirus response
"We were seeing AI being used extensively before Covid-19, and during Covid-19 you're seeing an increase in the use of some types of tools," noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of AI Now Institute, which carries out research examining the social implications of AI.
Horizon
Sep 7, 2020
AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute
In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says regulation advocates are beginning to believe a biometric surveillance state is not inevitable.
VentureBeat
Sep 4, 2020
Property tech companies are helping landlords spy on residents, collect their data, and even evict them. Critics are calling it an invasion of privacy that could reinforce inequality.
"It clearly seems to be a racist way of saying: 'Look through your tenants who you don't want to live here and replace them with tenants who you do,'" Erin McElroy, a researcher at the AI Now Institute and cofounder of the Anti-Eviction Mapping Project, told Business Insider.
Business Insider
Sep 3, 2020
All Is Fair In Mortgage Lending And Fintech?
As Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, explained to CBS News, “We turn to machine learning in the hopes that they'll be more objective, but really what they're doing is reflecting and amplifying historical patterns of discrimination and often in ways that are harder to see.”
Forbes
Aug 28, 2020
Skewed Grading Algorithms Fuel Backlash Beyond the Classroom
Inioluwa Deborah Raji, a fellow at NYU’s AI Now Institute, which works on algorithmic fairness, says people reaching for a technical solution often embrace statistical formulas too tightly. Even well-supported pushback is perceived as highlighting a need for small fixes, rather than as a reason to reconsider whether the system is fit for purpose.
Wired
Aug 19, 2020
Predictive policing algorithms are racist. They need to be dismantled.
Police like the idea of tools that give them a heads-up and allow them to intervene early because they think it keeps crime rates down, says Rashida Richardson, director of policy research at the AI Now Institute.
MIT Technology Review
Jul 17, 2020
AI gatekeepers are taking baby steps toward raising ethical standards
“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute.
Quartz
Jun 26, 2020
Big Tech juggles ethical pledges on facial recognition with corporate interests
"It shows that organizing and socially informed research works," said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. "But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No."
NBC News
Jun 22, 2020
How Companies Should Answer The Call For Responsible AI
A recent report from the AI Now Institute revealed that 80% of AI professors, 85% of AI research staff at Facebook, and 90% of AI employees at Google are male.
Forbes
Feb 28, 2020
Catholic leaders call for ethical guidelines around AI
Tech companies need binding, detailed policies that hold them accountable in addressing the many ethical concerns surrounding AI, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Feb 28, 2020
Here’s What Happens When an Algorithm Determines Your Work Schedule
“Worse yet, corporate secrecy is a structural characteristic of the AI industry in general that undermines our ability to assess the full extent of how the technology is being used and hinders redress, oversight, and regulation.”
Vice
Feb 24, 2020
The Great Google Revolt
The discovery that their employer was working hand in hand with the Defense Department to bolster drone warfare “was a watershed moment,” said Meredith Whittaker, an A.I. researcher based in New York who led Google’s Open Research Group.
The New York Times
Feb 18, 2020
Bridging The Gender Gap In AI
The gap appears even more stark at the “FANG” companies—according to the AI Now Institute just 15% of AI research staff at Facebook and 10% at Google are women.
Forbes
Feb 17, 2020
US lawmakers concerned by accuracy of facial recognition
Ms Whittaker said corporate interests should not be allowed to "race ahead" and incorporate this technology into their systems without safeguards.
BBC
Jan 16, 2020
The Technology 202: Facial recognition gets another look on Capitol Hill today from skeptical lawmakers
Expect a divided group: Whittaker will call for policymakers and businesses to halt all use of facial recognition in sensitive social situations and political contexts, according to a press release.
The Washington Post
Jan 15, 2020
AI poses risks, but the White House says regulators shouldn’t “needlessly hamper” innovation
“It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government and the private sector around AI technology will offer greater insight to those of us working in the accountability space.”
Vox
Jan 8, 2020
New York City couldn’t pry open its own black box algorithms. So now what?
But Rashida Richardson, the policy research director of AI Now, says that position isn’t nearly enough, as the role has neither the mandate nor the power to reveal what automated systems are in use.
Vox
Dec 18, 2019