To Sarah Myers West, managing director of the AI Now Institute, a research center, this public distrust is both appropriate and unsurprising. “I think people have learned from the past decade of tech-enabled crises,” she told me. “It’s quite clear when you look at the evidence that self-regulatory approaches don’t work.”

“Look at the very, very delicate phrasing that OpenAI uses when they make their calls for regulation,” West said. “There’s always a qualifier — like saying, ‘We want regulation for artificial general intelligence, or for models that exceed a particular threshold’ — thus excluding everything that they already have out in commercial use.” Tech companies know regulation is probably inevitable, so they back narrow measures that preempt bolder reform.

Aware that blind trust in the benevolence of Big Tech is not an option, West and her team at the AI Now Institute this month published a new framework called “Zero Trust AI Governance.” It’s exactly what it sounds like — a call for lawmakers to take matters into their own hands.

Crucially, that requires not making the same mistake we made in the social media era: equating tech progress with social progress. “That’s been like a foregone conclusion,” West said. “I’m not convinced that AI is associated with progress in every instance.”
