A debacle at the company that built ChatGPT highlights concern that commercial forces are acting against the responsible development of artificial-intelligence systems.
“The push to retain dominance is leading to toxic competition. It’s a race to the bottom,” says Sarah Myers West, managing director of the AI Now Institute, a policy-research organization based in New York City.
OpenAI is not alone in pursuing large language models, but the release of ChatGPT probably pushed others to deployment: Google launched its chatbot Bard in March 2023, the same month that an updated version of ChatGPT, based on GPT-4, was released. West worries that products are appearing before anyone has a full understanding of their behaviour, uses and misuses, and that this could be “detrimental for society”.
West emphasizes that it’s important to focus on already-present threats from AI ahead of far-flung concerns — and to ensure that existing laws are applied to tech companies developing AI. The events at OpenAI, she says, highlight how just a few companies with the money and computing resources to feed AI wield a lot of power — something she thinks needs more scrutiny from anti-trust regulators. “Regulators for a very long time have taken a very light touch with this market,” says West. “We need to start by enforcing the laws we have right now.”