Read an op-ed in The Atlantic by Sarah Myers West and Amba Kak.

The risks posed by new technologies are not science fiction. They are real.

Much of the time, discussions about artificial intelligence are far removed from the realities of how it’s used in today’s world. Earlier this year, executives at Anthropic, Google DeepMind, OpenAI, and other AI companies declared in a joint letter that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” In the lead-up to the AI summit that he recently convened, British Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” Existential risks—or x-risks, as they’re sometimes known in AI circles—evoke blockbuster science-fiction movies and play to many people’s deepest fears.

For more, head here