Download the full PDF here.


Leading civil society organizations Accountable Tech, AI Now Institute, and EPIC jointly released a new “Zero Trust AI Governance” framework, which offers policymakers a robust and enforceable roadmap for addressing the urgent societal risks posed by AI.

Rapid advances in AI, the frenzied deployment of new systems, and the surrounding hype cycle have generated a swell of excitement about AI’s potential to transform society for the better. But we are not on course to realize those rosy visions. AI’s trajectory is being dictated by a toxic arms race amongst a handful of unaccountable Big Tech companies – surveillance giants who serve as the modern gatekeepers of information, communications, and commerce.

The societal costs of this corporate battle for AI supremacy are already stacking up as companies rush unsafe systems to market – like chatbots prone to confidently spewing falsehoods – and recklessly integrate them into flagship products and services.

Near-term harms include turbocharged election manipulation and scams, exacerbated bias and discrimination, and the erosion of privacy and autonomy, among others. Additional systemic threats loom over the medium and long term, like steep environmental costs, large-scale workforce disruptions, and further consolidation of power by Big Tech across the digital economy.

Industry leaders have gone even further, warning of the threat of extinction as they publicly echo calls for much-needed regulation – all while privately lobbying against meaningful accountability measures and continuing to release increasingly powerful new AI systems. Given the monumental stakes, blind trust in their benevolence is not an option.

Indeed, a closer examination of the regulatory approaches they’ve embraced – namely ones that forestall action with lengthy processes, hinge on overly complex and hard-to-enforce regimes, and foist the burden of accountability onto those who have already suffered harm – informed the three overarching principles of this Zero Trust AI Governance framework:

  1. Time is of the essence – start by vigorously enforcing existing laws.
  2. Bold, easily administrable, bright-line rules are necessary.
  3. At each phase of the AI system lifecycle, the burden should be on companies to prove their systems are not harmful.

Absent swift federal action to alter the current dynamics – by vigorously enforcing laws on the books, finally passing strong federal privacy legislation and antitrust reforms, and enacting robust new AI accountability measures – the scope and severity of harms will only intensify.


If we want the future of AI to protect civil rights, advance democratic ideals, and improve people’s lives, we must fundamentally change the incentive structure.

