Related Publications [2]
A New AI Lexicon: Existential Risk
Oct 8, 2021
Six Unexamined Premises Regarding Artificial Intelligence and National Security
Mar 31, 2021
Related Press [14]
OpenAI’s chief scientist helped to create ChatGPT — while worrying about AI safety
But others say that focusing on how to control yet-to-arrive AI systems distracts from the real and present dangers of the technology. It “puts intervention way off on the horizon”, says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization in New York City. Instead, she says, we need “to tackle near-term harms”, such as AI systems reinforcing biases in their training data, or potentially leaking private information.
Nature
Apr 13, 2023
500 Top Technologists and Elon Musk Demand Immediate Pause of Advanced AI Systems
“But here’s what’s key to understand about ChatGPT and other similar large language models: they’re not in any way actually reflecting the depth of understanding of human language—they’re mimicking its form.”
Gizmodo
Mar 29, 2023
Elon Musk and several other technologists call for a pause on training of AI systems.
“By focusing on hypothetical and long-term risks, it distracts from the regulation and enforcement we need in the here and now,” Myers West said.
NBC News
Mar 29, 2023
Sam Altman says ‘potentially scary’ AI is on the horizon. This is what keeps AI experts up at night
Sarah Myers West, Managing Director of the AI Now Institute, told Euronews Next that "in many senses, that’s already where we are," with AI systems already being used to exacerbate "longstanding patterns of inequality".
Euronews
Feb 28, 2023
Mental health service criticised for experiment with AI chatbot
The experiment “raises significant ethical and moral concerns”, says Sarah Myers West at the AI Now Institute, a research center in New York City.
New Scientist
Jan 10, 2023
Critics raise alarm over Big Tech’s most powerful tools
“You never see these companies picking ethics over revenue,” says Meredith Whittaker, a former Google engineer who now heads the AI Now Institute at New York University. “These are companies that are governed by shareholder capitalism.”
Financial Times
May 16, 2021
“What on earth are they doing?”: AI ethics experts react to Google doubling embattled ethics team
“It sounds like a very desperate kind of cover-your-ass move from a company that is really unsure about how to continue to do business as usual in the face of a growing outcry around the harms and implications of their technologies.”
Tech Brew
May 14, 2021
U.S. Lawmakers Pressure DOJ Over Funding of Predictive Policing Tools
The lawmakers pointed to a 2019 study from New York University School of Law and NYU’s AI Now Institute, which found that predictive policing systems were being trained on what the researchers call “dirty data,” or data derived from “corrupt, biased, and unlawful practices.”...
Gizmodo
Apr 15, 2021
AI gatekeepers are taking baby steps toward raising ethical standards
“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute.
Quartz
Jun 26, 2020
Catholic leaders call for ethical guidelines around AI
Tech companies need binding, detailed policies that hold them accountable in addressing the many ethical concerns surrounding AI, says Meredith Whittaker, co-founder of the AI Now Institute at New York University.
Axios
Feb 28, 2020
AI poses risks, but the White House says regulators shouldn’t “needlessly hamper” innovation
“It will take time to assess how effective these principles are in practice, and we will be watching closely,” said Rashida Richardson, the director of policy research at the AI Now Institute. “Establishing boundaries for the federal government and the private sector around AI technology will offer greater insight to those of us working in the accountability space.”
Vox
Jan 8, 2020
Former Defense Secretary Ash Carter says AI should never have the “true autonomy” to kill
“What do you do with a group of people — again, going back to regulation — who have never been regulated to start to put guardrails in place? How do you get them to stick to those or think of them as a good thing?”
Vox
May 13, 2019
Why Tech Billionaires Are Spending To Restrain Artificial Intelligence
A report from New York University’s AI Now Institute found that a predominantly white male coding workforce is causing bias in algorithms.
Forbes
Apr 26, 2019
A Crucial Step for Averting AI Disasters
The findings have spurred calls for closer scrutiny. Microsoft recently called on governments to regulate facial-recognition technology and to require auditing of systems for accuracy and bias. The AI Now Institute, a research group at New York University, is studying ways to reduce bias in AI systems.
The New York Times
Feb 13, 2019