Read an op-ed in The Atlantic by Sarah Myers West and Amba Kak.
The risks posed by new technologies are not science fiction. They are real.
Much of the time, discussions about artificial intelligence are far removed from the realities of how it’s used in today’s world. Earlier this year, executives at Anthropic, Google DeepMind, OpenAI, and other AI companies declared in a joint letter that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” In the lead-up to the AI summit that he recently convened, British Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” Existential risks—or x-risks, as they’re sometimes known in AI circles—evoke blockbuster science-fiction movies and play to many people’s deepest fears.
But AI already poses economic and physical threats—ones that disproportionately harm society’s most vulnerable people. Some individuals have been incorrectly denied health-care coverage, or kept in custody based on algorithms that purport to predict criminality. Human life is explicitly at stake in certain applications of artificial intelligence, such as AI-enabled target-selection systems like those the Israeli military has used in Gaza. In other cases, governments and corporations have used artificial intelligence to disempower members of the public and conceal their own motivations in subtle ways: in unemployment systems designed to embed austerity politics; in worker-surveillance systems meant to erode autonomy; in emotion-recognition systems that, despite being based on flawed science, guide decisions about whom to recruit and hire.
Read the full op-ed at The Atlantic.