But Heidy Khlaaf, chief AI scientist at the independent research group the AI Now Institute, says that despite Anthropic’s safety-first reputation, the company has consistently fallen short in its efforts to prevent human harm.

From its first safety policy, Khlaaf says, Anthropic has focused too much on the possibility of catastrophic events far down the road, rather than on the harm that current AI technology can already cause, such as run-of-the-mill errors with chatbots.

The Claude chatbot has previously been misused in fraud schemes and attempts to create malware, and was recently used to steal Mexican government data, according to cybersecurity researchers.

She says the company is now dropping the “veneer of safety” it has previously used to market itself, because it has become clear that maintaining it is not in its best interest.

“This is a strategic announcement to show that they’re open for business,” Khlaaf said.

Read more here.
