China’s Big Tech clampdown: Why some businesses stand to benefit
“I’m not surprised that the pendulum swung one way and there was all this room for experimentation and now it’s swinging back the other way and you actually can’t do these things,” said Shazeda Ahmed, a visiting researcher at New York University’s AI Now Institute who has extensively studied the relationships between China’s tech industry and the local, provincial and central government.
Al Jazeera
Jan 26, 2021
Twitter probes alleged racial bias in image cropping feature
"This is another in a long and weary litany of examples that show automated systems encoding racism, misogyny and histories of discrimination."
Reuters
Sep 22, 2020
Questions swirl about possible racial bias in Twitter
Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, told CNBC: “Algorithmic discrimination is a reflection of larger patterns of social inequality … it’s about much more than just bias on the part of engineers or even bias in datasets, and will require more than a shallow understanding or set of fixes.”
CNBC
Sep 21, 2020
Fake Data Could Help Solve Machine Learning’s Bias Problem—if We Let It
“That process of creating a synthetic data set, depending on what you’re extrapolating from and how you’re doing that, can actually exacerbate the biases,” says Deb Raji, a technology fellow at the AI Now Institute.
Slate
Sep 17, 2020
‘Encoding the same biases’: Artificial intelligence’s limitations in coronavirus response
"We were seeing AI being used extensively before Covid-19, and during Covid-19 you're seeing an increase in the use of some types of tools," noted Meredith Whittaker, a distinguished research scientist at New York University in the US and co-founder of AI Now Institute, which carries out research examining the social implications of AI.
Horizon
Sep 7, 2020
AI Weekly: A biometric surveillance state is not inevitable, says AI Now Institute
In a new report called “Regulating Biometrics: Global Approaches and Urgent Questions,” the AI Now Institute says regulation advocates are beginning to believe a biometric surveillance state is not inevitable.
VentureBeat
Sep 4, 2020
Property tech companies are helping landlords spy on residents, collect their data, and even evict them. Critics are calling it an invasion of privacy that could reinforce inequality.
"It clearly seems to be a racist way of saying: 'Look through your tenants who you don't want to live here and replace them with tenants who you do,'" Erin McElroy, a researcher at the AI Now Institute and cofounder of the Anti-Eviction Mapping Project, told Business Insider.
Business Insider
Sep 3, 2020
All Is Fair In Mortgage Lending And Fintech?
As Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute explained to CBS News, “We turn to machine learning in the hopes that they'll be more objective, but really what they're doing is reflecting and amplifying historical patterns of discrimination and often in ways that are harder to see.”
Forbes
Aug 28, 2020
Skewed Grading Algorithms Fuel Backlash Beyond the Classroom
Inioluwa Deborah Raji, a fellow at NYU’s AI Now Institute, which works on algorithmic fairness, says people reaching for a technical solution often embrace statistical formulas too tightly. Even well-supported pushback tends to be read as a call for small fixes rather than as a reason to reconsider whether the system is fit for purpose at all.
Wired
Aug 19, 2020
Predictive policing algorithms are racist. They need to be dismantled.
Police like the idea of tools that give them a heads-up and allow them to intervene early because they think it keeps crime rates down, says Rashida Richardson, director of policy research at the AI Now Institute.
MIT Technology Review
Jul 17, 2020
AI gatekeepers are taking baby steps toward raising ethical standards
“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute.
Quartz
Jun 26, 2020
Big Tech juggles ethical pledges on facial recognition with corporate interests
"It shows that organizing and socially informed research works," said Meredith Whittaker, co-founder of the AI Now Institute, which researches the social implications of artificial intelligence. "But do I think the companies have really had a change of heart and will work to dismantle racist oppressive systems? No."
NBC News
Jun 22, 2020
How Companies Should Answer The Call For Responsible AI
A recent report from the AI Now Institute revealed that 80% of AI professors, 85% of AI research staff at Facebook, and 90% of AI employees at Google are male.
Forbes
Feb 28, 2020
Catholic leaders call for ethical guidelines around AI
Tech companies need binding, detailed policies that hold them accountable in addressing the many ethical concerns surrounding AI, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Feb 28, 2020
Here’s What Happens When an Algorithm Determines Your Work Schedule
“Worse yet, corporate secrecy is a structural characteristic of the AI industry in general that undermines our ability to assess the full extent of how the technology is being used and hinders redress, oversight, and regulation.”
Vice
Feb 24, 2020
The Great Google Revolt
The discovery that their employer was working hand in hand with the Defense Department to bolster drone warfare “was a watershed moment,” said Meredith Whittaker, an A.I. researcher based in New York who led Google’s Open Research Group.
The New York Times
Feb 18, 2020
Bridging The Gender Gap In AI
The gap appears even more stark at the “FANG” companies: according to the AI Now Institute, just 15% of AI research staff at Facebook and 10% at Google are women.
Forbes
Feb 17, 2020
US lawmakers concerned by accuracy of facial recognition
Ms Whittaker said corporate interests should not be allowed to "race ahead" and incorporate this technology into their systems without safeguards.
BBC
Jan 16, 2020