“That OpenAI’s guardrails are so easily tricked illustrates why it’s particularly important to have robust pre-deployment testing of AI models before they cause substantial harm to the public,” said Sarah Meyers West, a co-executive director at AI Now, a nonprofit group that advocates for responsible and ethical AI usage.
“Companies can’t be left to do their own homework and should not be exempted from scrutiny,” she said.