The embarrassing failure of the Paris AI Summit
Experts are sounding the alarm — but governments simply won’t listen
If there were any doubts about governments’ commitment to addressing AI risks, the Paris AI Action Summit has put them to rest — though not in the way organizers might have hoped. What was supposed to be a crucial forum for international cooperation has ended as a cautionary tale about how easily serious governance efforts can be derailed by national self-interest.
President Emmanuel Macron transformed what should have been a pivotal safety summit into a showcase for France’s tech industry. Participants were subjected to endless promotions of Mistral and other French startups, complete with nationalist rhetoric about France “being back in the AI race.” The revolving door between government and industry was on full display: Mistral was, of course, cofounded by France’s former digital minister Cédric O.
The summit’s substance suffered accordingly. The International Science of AI Safety report, meant to be the AI field’s equivalent of the IPCC’s climate assessments, was quite literally pushed into a back room — relegated to a side event while corporate executives took center stage. Summit chief Anne Bouverot’s dismissal of existential risks as “science fiction” set the tone: serious discussion of AI safety would not be welcome at this celebration of French technological prowess.
It was a stark reversal from previous summits. Bletchley had established a serious focus on catastrophic risks. Seoul secured commitments to develop concrete risk thresholds and capability evaluations. Paris was supposed to build on these promises. Instead, it ignored them entirely.
The contrast between the summit’s proceedings and concurrent events is jarring. While French officials celebrated their domestic AI champions, DeepMind’s Demis Hassabis warned that AGI could arrive within five years. As officials promoted non-binding commitments to study AI’s environmental impact, Anthropic’s Dario Amodei predicted human-level AI by 2027 (“almost certainly no later than 2030”), and criticized the summit as a “missed opportunity,” warning that “greater focus and urgency is needed ... given the pace at which the technology is progressing.”
These aren’t idle speculations. They’re warnings from the people actually building these systems, who openly admit they haven’t solved the technical challenge of making them safe — or the social challenge of how society will adapt to them. On Sunday, Hassabis said “there needs to be more time spent by economists ... and philosophers and social scientists on ‘what do we want the world [with AGI] to be like’”. Yet conversations of that sort were entirely absent from the Paris Summit.
The summit’s failure is most visible in its “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”. When Transformer leaked the summit declaration last week, experts were scathing. Stuart Russell said its failure to address catastrophic risks was “negligence of an unprecedented magnitude.” Other experts agreed: the declaration was “wholly devoid of any action to address AI risk” and “inexcusably derelict”, they said.
The final statement, identical to the leaked draft, manages to be both substanceless and divisive, with both the US and UK refusing to sign it. The UK government’s explanation was damning: the declaration “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security.” The US, meanwhile, was represented by JD Vance’s jingoistic accelerationism, while France failed to broker the agreements between the US and China that are so desperately needed. It’s a fitting end to a summit that prioritized aesthetics over substance.
There’s a dark irony in how this has played out. The summit’s failure to secure meaningful international cooperation on AI governance has demonstrated precisely why we need meaningful international cooperation on AI governance. Left to their own devices, countries will tend to prioritize narrow interests over collective security. As experts kept telling me in Paris, we face a tremendous “collective action” problem, which can only be solved through meaningful international agreements.
Yet the window for establishing such agreements is closing rapidly. If leading AI developers are to be believed, we have perhaps two years before the first AGI systems emerge. But rather than treating this with the urgency it demands, our governments spent the week doing … nothing.
We can no longer afford this pantomime of progress. The age of polite summit declarations and non-binding commitments must end. We need governments to acknowledge the reality that AI companies themselves are shouting from the rooftops: transformative AI systems are coming, probably soon, and we are woefully unprepared for their arrival.
The Paris Summit’s failure may yet serve one useful purpose: as a wake-up call. It has shown, definitively, that the current approach to AI governance is broken. The question now is whether we have time to fix it.