Sarah Myers West, managing director of the AI Now Institute, tells The Verge she is skeptical of the licensing system proposed by many speakers. “I think the harm will be that we end up with some sort of superficial checkbox exercise, where companies say ‘yep, we’re licensed, we know what the harms are and can proceed with business as usual,’ but don’t face any real liability when these systems go wrong,” she said.

West also says that a focus on future harms has become a common rhetorical sleight of hand among AI industry figures. These individuals “position accountability right out into the future,” she said, generally by talking about artificial general intelligence, or AGI: a hypothetical AI system smarter than humans across a range of tasks. Some experts suggest we’re getting closer to creating such systems, but this conclusion is strongly contested.

The conversation, she argues, should center instead on the harms these systems are causing now. As West says, “That’s where the conversation needs to be headed if we’re going for any type of meaningful accountability in this industry.”