Leaked: this is the AI Action Summit statement
The statement, set to be signed by countries next week, is a 'wasted opportunity', experts say — and looks unlikely to be signed by US officials
A leaked draft of the statement set to be signed by countries at next week’s Paris AI Action Summit reveals a whole lot of buzzwords — and little concrete action.
The draft statement, dated January 30th and provided to Transformer by multiple sources, is titled “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”. (The full statement is included at the end of this article.)
The draft statement barely mentions AI risks, seemingly confirming AI experts’ fears that the Paris Summit will be a missed opportunity for world leaders to tackle AI safety. It fails to follow up on commitments countries made at previous Summits, and does not lay out any roadmap for doing so in future.
Herbie Bradley, a former employee of the UK AI Safety Institute, said that the draft “says effectively nothing except for platitudes”, noting that “it doesn't contain anything concrete around either technical AI research or government testing of AI systems”.
Its focus on “diversity”, “inclusivity” and “sustainability”, meanwhile, raises serious questions as to whether the Trump administration will sign on to the statement; a US refusal would represent a huge blow to the Summit’s French organizers. “If the final draft is similar,” Bradley said, “then I think it would not be in the interests of the new US administration to sign this, and if so then the UK should strongly consider also declining to sign”.
Other experts strongly criticised the statement too. “This declaration is a wasted opportunity to build upon the historic progress made at the Bletchley and Seoul AI Safety Summits,” said Andrea Miotti, executive director of AI safety campaign group ControlAI. Miotti lambasted the statement for being “wholly devoid of any action to address AI risk”.
Another expert on AI governance, who spoke to Transformer under condition of anonymity, said that the statement needed to declare “an intention of governments to prepare for the possibility of companies being technically capable of creating AGI in the coming few years, and the global challenges this could pose”. Without doing that, they said, it would be “inexcusably derelict in its responsibilities to citizens around the world”.
The final statement could differ from this leaked draft, but expectations for substantial revisions are low. The French government did not respond to a request for comment.
Notably, the Paris statement fails to build on any of the commitments made at previous summits. At Bletchley, countries (including France) declared that AI “poses significant risks”, noting that “substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent” and committing to “building … risk-based policies” to tackle the risks.
In Seoul, companies committed to developing safety frameworks, while governments agreed to “identify thresholds at which the level of risk posed by … frontier AI models or systems would be severe absent appropriate mitigations, and to define frontier AI model or system capabilities that could pose severe risks, with the ambition of developing proposals for consideration in advance of the AI Action Summit in France”.
Discussion of such thresholds is entirely absent from the Paris statement, which also fails to specify a roadmap for developing them in future.
The statement’s blindness to catastrophic risks appears to stem from the top: Summit chief Anne Bouverot recently dismissed them as “science fiction”, asserting that there is “no evidence” for them despite mounting empirical evidence for misalignment and deceptive behavior in AI systems.
Yet the statement does not meaningfully tackle even “present-day” harms from AI: while it mentions the impact of AI on energy and workers, for instance, it goes no further than setting up “observatories” to examine those impacts.
Nick Moës, executive director of The Future Society, called the energy observatory an “important step forward”, noting that it aligned with recommendations his organization received during a pre-summit consultation.
But Moës also stressed that more action was necessary: “To truly deliver for AI governance, we need to see stronger international efforts to guard against systemic risks and group-based harms.”
One concrete output mentioned in the statement is the establishment of a “Public Interest AI Platform and Incubator”, which will aim to create a “trustworthy AI ecosystem advancing the public interest of all”. Moës called the platform “promising”.
Politico has separately reported that the summit aims to set up an “AI Foundation”, which would be funded by governments, companies, and philanthropies.
The foundation’s flagship initiatives would include “linguistic diversity”, “trust and safety tooling”, and a “global media trust” to enable collective negotiation between media outlets and AI companies, according to a leaked slide deck obtained by Politico.
At a time when AI capabilities continue to advance at a breakneck pace, the statement and slide deck feel particularly out of touch. Last month, Anthropic CEO Dario Amodei said he expects human-level AI by 2027, while Google DeepMind CEO Demis Hassabis said AGI is “three to five years away”.
The statement’s failure to grapple with these imminent possibilities suggests a dangerous disconnect between policymakers and expert predictions. In failing to address the challenges potentially facing society, it may be remembered as a historic missed opportunity.
“The statement says absolutely nothing about the global security risks posed by superintelligent AI systems, at a time when top AI company CEOs are rushing towards building a technology that they themselves openly admit can end all life on Earth,” said Miotti. “We can't afford to waste any time.”
Here’s the full draft statement, dated January 30th, 2025.
AI Action Summit
Co-chaired by France and India
10-11 February, 2025, Paris
Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet
Participants from XX countries, including government leaders, international organisations, representatives of civil society, the private sector, and the academic and research communities gathered in Paris on 10 and 11 February 2025 to hold the AI Action Summit. Rapid development of AI technologies represents a major paradigm shift, impacting our citizens and societies in many ways. In line with the Paris Pact for People and the Planet, and the principle that countries must have ownership of their transition strategies, we have identified priorities and launched concrete actions to advance the public interest and to bridge digital divides through accelerating progress towards the SDGs. Our actions are grounded in three main principles of science, solutions - focusing on open AI models in compliance with countries' frameworks - and policy standards, in line with international frameworks.
This Summit has highlighted the importance of reinforcing the diversity of the AI ecosystem. It has laid out an open, multi-stakeholder and inclusive approach that will enable AI to be human rights based, human-centric, ethical, safe, secure and trustworthy, while also stressing the need and urgency to narrow inequalities and assist developing countries in artificial intelligence capacity-building so they can build AI capacities.
Acknowledging existing multilateral initiatives on AI, including the United Nations General Assembly Resolutions, the Global Digital Compact, the UNESCO Recommendation on the Ethics of AI, the African Union Continental AI Strategy, and the work of the Organization for Economic Cooperation and Development (OECD), the Council of Europe and the European Union, the G7 including the Hiroshima AI Process, and the G20, we have affirmed the following main priorities:
Promoting AI accessibility to reduce digital divides;
Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all;
Making innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development;
Encouraging AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth;
Making AI sustainable for people and the planet;
Reinforcing international cooperation to promote coordination in international governance.
To deliver on these priorities:
Founding members have launched a major Public Interest AI Platform and Incubator, to support, amplify, and decrease fragmentation between existing public and private initiatives on Public Interest AI and to address digital divides. The Public Interest AI Initiative will sustain and support digital public goods and technical assistance and capacity building projects in data, model development, openness and transparency, audit, compute, talent, financing and collaboration to support and co-create a trustworthy AI ecosystem advancing the public interest of all, for all and by all.
We have discussed, at a Summit for the first time and in a multi-stakeholder format, issues related to AI and energy. This discussion has led to sharing knowledge to foster investments for sustainable AI systems (hardware, infrastructure, models), to promoting an international discussion on AI and the environment, to welcoming an observatory on the energy impact of AI with the International Energy Agency, and to showcasing energy-friendly AI innovation.
We recognize the need to enhance our shared knowledge of the impacts of AI on the job market, through the creation of a network of Observatories, to better anticipate AI implications for workplaces, training and education and to use AI to foster productivity, skill development, quality and working conditions and social dialogue.
We recognize the need for inclusive multistakeholder dialogues and cooperation on AI governance. We underline the need for a global reflection integrating, inter alia, questions of safety, sustainable development, innovation, respect of international law including humanitarian law and human rights law and the protection of human rights, gender equality, linguistic diversity, protection of consumers and of intellectual property rights. We take note of efforts and discussions related to international fora where AI governance is examined. As outlined in the Global Digital Compact adopted by the UN General Assembly, participants also reaffirmed their commitment to initiate a Global Dialogue on AI governance and the Independent International Scientific Panel on AI and to align ongoing governance efforts, ensuring complementarity and avoiding duplication.
Harnessing the benefits of AI technologies to support our economies and societies depends on advancing Trust and Safety. We commend the role of the Bletchley Park and Seoul AI Safety Summits, which have been essential in progressing international cooperation on AI safety, and we note the voluntary commitments launched there. We will keep addressing the risks of AI to information integrity and continue the work on AI transparency.
We look forward to the next AI milestones, such as the Kigali Summit, the 3rd Global Forum on the Ethics of AI hosted by Thailand and UNESCO, the 2025 World AI Conference and the AI for Good Global Summit 2025, to follow up on our commitments and continue to take concrete actions aligned with sustainable and inclusive AI.