Amba Kak participated as a civil society representative on Days 1 and 2 of the UK AI Safety Summit. Here are her remarks from Day 2 addressing government leaders and AI company executives:

“I want to step back and make one simple point today: we are at risk of further entrenching the dominance of a handful of private actors over our economy and our social institutions. Concentration of power is the result of the accumulation of data, compute, talent, and capital within a small number of companies that have reaped the rewards of the surveillance business model—the same business model that is now known to have caused serious privacy and safety harms to our society. And this situation was allowed to deteriorate because of the delayed action of regulators.

There’s a linearity and inevitability to the conversations we’re having about AI, even at this Summit: larger scale equals more capability. What’s under-emphasized, however, is that this push to ever larger scale only heightens the demand for resources that few players control: whether that’s Nvidia’s near monopoly on chips; the dominance of AWS, Google, and Microsoft in cloud computing; or the proprietary, high-quality datasets that big tech ecosystems control, which give them an undeniable advantage.

Outside of this room, there is momentum building on these issues: the Biden Administration made this clear in its recent Executive Order, which calls on the Federal Trade Commission (FTC) to consider rulemaking to ensure fair competition in AI; the Office of Management and Budget included a mandate to consider competition concerns in federal procurement guidelines; and the Competition and Markets Authority issued a report on the competition implications of LLMs.

Concentration of power is an urgent concern for our time: it’s bad for innovation, it’s bad for national security, and, as the Chair of the Securities and Exchange Commission and the Governor of the Bank of England have warned, we’re creating single points of failure that pose systemic risks to the financial system. It’s deepening global inequality by leaps and bounds. But perhaps most importantly, it’s bad for democracy to have a handful of private actors with outsized influence on our economic and political institutions.

On that note: I want to call attention to the fact that I’m one of woefully few civil society voices in the room today. This conversation absolutely benefits from multiple perspectives, but it needs to be driven by those who represent the public interest, not private actors with direct conflicts of interest in the outcome.

I also hope that when we convene again in Korea and again in France, we will be in a very different room. One that is broader, more diverse, representative of the many affected by these systems, and most importantly: designed to contest, rather than reinforce, concentration of power.”

Before my remarks, I shared some immediate reactions to the industry commitments to pre-deployment testing that the Prime Minister had just announced.

First, while it is great to see global momentum towards independent testing of AI systems before release, let’s make no mistake: this voluntary consensus between companies is fragile, and it will remain fragile until it is baked into national regulatory frameworks by lawmakers who are accountable to the public. We must always ask: why would developers of AI models feel empowered or incentivized to course-correct, or to shape products in ways that might contradict the firm’s business goals and profit margins?

On independent evaluation, we need to ask: are the evaluators of these systems adequately incentivized and resourced to ensure meaningful independence? Do they have the access, transparency, and legal protections needed to accomplish the task at hand?

Which brings me to my next point: there must be real, enforceable consequences for harms that are surfaced by these evaluations. AI systems must not be released widely unless they have demonstrably addressed risks and accounted for harms, including compliance with existing legal frameworks. And as we look to the frontier, we cannot give the present a clean chit: I’m left wondering, what happens to the AI systems that are publicly available today, with known labor, privacy, and informational harms?

Finally, the fundamental question here is how we define safety, and crucially, safety for whom? AI safety must be understood as more than a purely scientific endeavor to be studied in lab settings. AI systems need to be examined in the contexts in which they are used, and designed to protect the people on whom AI will be deployed. As Vice President Harris said yesterday: for an elderly person denied healthcare coverage, or someone falsely imprisoned due to facial recognition, the risks are existential.