What AI companies want from Trump
Transformer Weekly: AI Action Plan responses, new EU COP draft, and preparing for an intelligence explosion
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
As the deadline for the AI Action Plan RFI draws closer, a bunch of AI companies published their comments.
Anthropic’s is about what you’d expect — emphasizing the risks and encouraging the government to take some steps towards mitigating them, though nothing particularly stringent. Some safety-relevant highlights:
“We expect powerful AI systems will emerge in late 2026 or early 2027”, defined as having “intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines” and “the ability to autonomously reason through complex tasks over extended periods”.
“The federal government must develop robust capabilities to rapidly assess any powerful AI system, foreign or domestic, for potential national security uses and misuses.”
“we believe if there is evidence that AI systems pose critical national security risks then developers like Anthropic should be required to test their systems for these risks”
“We strongly recommend the administration strengthen export controls on computational resources and implement appropriate export restrictions on certain model weights,” including expanding export controls to cover H20 chips and lowering the diffusion rule’s no-license-required threshold of 1,700 H100s for Tier 2 countries.
“The federal government should partner with industry leaders to substantially enhance security protocols at frontier AI laboratories to prevent adversarial misuse and abuse of powerful AI technologies.”
Google’s is weaker. It starts off by claiming that “for too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have”, and goes on to call for:
“Championing market-driven and widely adopted technical standards and security protocols for frontier models”.
“Working with industry and aligned countries to develop tailored protocols and standards to identify and address potential national security risks of frontier AI systems”
“supporting federal preemption of state-level laws that affect frontier AI models … [which] would ensure a unified national framework for frontier AI models focused on protecting national security while fostering an environment where American AI innovation can thrive.”
It’s not clear what Google thinks this framework should look like, though — and it seems a lot will depend on this.
There are some anti-regulatory buzzwords in here: it says “government regulation should be focused on specific applications”, and that developers should not “bear responsibility for misuse by customers or end users”.
It also says the government should “avoid overbroad disclosure requirements” (such as those “contemplated in the EU”).
And the company also says the US should “[engage] foreign governments to deter efforts to impose measures that restrict AI development and deployment by US and local companies”
This echoed similar language from a consortium of big business groups, including the Chamber of Commerce, which said this week that jurisdictions including the EU are “implementing divergent and non-risk-based regulatory frameworks that make it more onerous and costly to develop and deploy AI”.
But it does say that “for the most capable frontier AI systems, the Administration should identify potential capabilities that could raise national security risks and work with industry to develop and promote standardized industry protocols, secure data-sharing, standards, and safeguards.”
“The Department of Commerce and NIST can lead on: (1) creating voluntary technical evaluations for major AI risks; (2) developing guidelines for responsible scaling and security protocols; (3) researching and developing safety benchmarks and mitigations (like tamper-proofing); and (4) assisting in building a private-sector AI evaluation ecosystem.”
It also calls on the government to “[promulgate] an international norm of ‘home government’ testing—wherein providers of AI with national security-critical capabilities are able to demonstrate collaboration with their home government on narrowly targeted, scientifically rigorous assessments that provide ‘test once, run everywhere’ assurance”.
And Google pushed back on the diffusion rule, arguing that it “may undermine economic competitiveness goals … by imposing disproportionate burdens on US cloud service providers”.
OpenAI’s, meanwhile, has to be seen to be believed. The content manages to be even weaker than Google’s, and the tone is utterly shameless — failing to mention AI risks even once in Chris Lehane’s five-page intro, stoking “race with China” vibes, and pushing back against all regulation. Some of the most notable asks:
OpenAI thinks companies should get “liability protections including preemption from state-based regulations that focus on frontier model security”, in exchange for voluntarily working with the federal government on national security risks.
The voluntary nature of this is repeatedly emphasized in the document — OpenAI is saying it does not want mandatory testing, evaluations, or safety standards.
That voluntary cooperation, the document says, would involve letting the government “stay informed about AI risks … including by establishing sandbox and testing capabilities on the secure premises of federal agencies” and “[providing] American AI companies with the tools and classified threat intelligence to mitigate national security risks that are exacerbated by frontier models”.
This could be “overseen by the US Department of Commerce and in coordination with the AI Czar, perhaps by reimagining the US AI Safety Institute”.
An interview Lehane did with Axios offered a little more clarity: “A voluntary structure housed at a reimagined US AI Safety Institute could test models as part of a public-private partnership in exchange for liability protection from dozens of state-level AI laws, which are creating uncertainty in the market, Lehane said.”
Incentives for voluntarily working with the government could also include “creating glide paths for them to contract with the government”, the doc says.
On the international level, the US should “continue to represent American company interests in safety and security standards bodies, and encourage global regulators to adopt pro-growth safety and security policies.”
On export controls, it calls for “maintaining the AI diffusion rule’s three-tiered framework … but with some key modifications that expand the number of countries in Tier I”.
It also suggests that Tier I countries should ban “the use of PRC-produced equipment (e.g., Huawei Ascend chips) and models that violate user privacy and create security risks”.
And on copyright, OpenAI says that “applying the fair use doctrine to AI is not only a matter of American competitiveness—it’s a matter of national security”.
Also notable:
Former OpenAI employee Gretchen Krueger said the submission “doesn't read to me as particularly charitable or beneficial to all of humanity”, noting that it redefines the company’s mission statement.
In that Axios interview, Lehane said “Maybe the biggest risk [of AI] is actually missing out on the opportunity”. It’s a far cry from Sam Altman telling people that AI could mean “lights out for everybody” a couple of years ago.
These quotes, though, can’t convey just how bad the overall vibes of the OpenAI submission are. No one reading it would be left with an accurate impression of what OpenAI leadership actually thinks is coming in the next couple of years — extremely disruptive, dangerous, and transformative technology.
It also, somewhat unsurprisingly given Lehane’s background, has the overwhelming feeling of being written by a Democrat cosplaying as a Republican.
Quizzed about the proposals on Twitter, OpenAI employee Roon defended it by saying “I think they are crafting their communication to the audience here”, and that “OpenAI has to do the hard work of actually dealing with the White House and figuring out whatever the hell they’re going to be receptive to”. He also noted that the doc does mention CBRN risk.
One last thought: hopefully this puts paid to the always absurd idea that companies pushed AI risk ideas in order to get a regulatory moat.
This is plenty long enough already, so I won’t bother summarizing comments from the Business Software Alliance or the Abundance Institute (though they’re worth a read).
The deadline for comments is midnight tomorrow, so I expect we’ll have a bunch more responses to unpack next week — I’m looking forward to seeing Meta’s, in particular.
The third draft of the EU Code of Practice for general purpose AI models was released.
It’s accompanied by this very good interactive website, which helps you understand what the text is doing and how it’s changed.
Luca Bertuzzi, one of the journalists who’s followed the AI Act most closely, said the new draft is “less prescriptive and more outcome-oriented, giving model providers more flexibility”.
That also aligns with my impression from reading the new draft. It’s much too long to summarize here, but a couple of important details:
“Large-scale illegal discrimination” is no longer classed as a systemic risk that companies always have to assess and mitigate.
Instead of specific security mitigations, the new draft now gives providers “more flexibility” in how they go about achieving the RAND SL3 security goal.
It’s also clarified a bunch of details on when and how external assessments are required, and reduced the amount of information sharing needed.
Not everyone’s happy though: Anselm Küsters of the Centre for European Policy said the draft still contains “serious ambiguity”, while CCIA Europe’s Boniface de Champris said that “serious issues remain … [including] burdensome external risk assessments”. DOT Europe’s Elias Papadopoulos also said it was “unfortunate” that mandatory third-party risk assessments remained in the code.
Expect lobbying to gear up as the code enters its final stage: there’ll be one more drafting round, and the code has to be done by May 2.
The discourse
Kudos to Sen. Ted Cruz, who is willing to follow his insane views all the way to their conclusion:
“There are those who present apocalyptic pictures of where AI is going to go. ... Look, I don't pretend to be smart enough to know if AI will someday take over the world and exterminate humanity. But I'll say this: If there are going to be killer robots, I'd rather they be American robots than Chinese robots.”
He also said that there are “serious problems with government being in charge of AI, including using so-called ‘safety’ standards to engage in rampant censorship”.
Kevin Roose explained his views on AGI:
“I believe that hardened AI skeptics — who insist that the progress is all smoke and mirrors, and who dismiss AGI as a delusional fantasy — not only are wrong on the merits, but are giving people a false sense of security … I believe that the right time to start preparing for AGI is now.”
Thane Ruthenis offered an interesting set of predictions, arguing that AI progress might actually not be that fast:
“I expect that none of the currently known avenues of capability advancement are sufficient to get us to AGI.”
Though that doesn’t mean everything’s fine: “At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI … By default, everyone will die in <1 year after that.”
Miles Brundage, on the other hand, is worried:
“‘When will we get really dangerous AI capabilities that could cause a very serious incident (billions in damage / hundreds+ of people dead)?’ Unfortunately, the answer seems to be this year, from what I can tell.”
Julia Villagra, OpenAI’s chief people officer, is also a bit on edge:
“We are on the brink of a massive societal transformation … there is a very real possibility that this will have a destabilizing impact if we are not prepared.”
Fin Moorhouse and Will MacAskill, writing for the new research organization Forethought, said we need to prepare for an intelligence explosion:
“AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession … These challenges include new weapons of mass destruction, AI-enabled autocracies, races to grab offworld resources, and digital beings worthy of moral consideration, as well as opportunities to dramatically improve quality of life and collective decision-making … we should be preparing, now, for the disorienting range of developments an intelligence explosion would bring.”
Policy
Senate Commerce approved Michael Kratsios' nomination as OSTP director. He said that he will “assess the AISI and help chart the best path forward for the institute”.
We also got a bunch of new Trump nominations:
Ethan Klein as associate director of OSTP.
Sean Plankey as next director of CISA.
Paul Dabbar as deputy secretary of Commerce.
Harry Kumar as assistant secretary of Commerce.
The DHS terminated the Artificial Intelligence Safety and Security Board, while the FCC announced a new Council for National Security, which includes AI among its priorities.
The FTC is reportedly moving forward with its antitrust investigation of Microsoft's AI operations and OpenAI partnership.
The House Energy and Commerce Committee is exploring repurposing brownfield sites for AI data centers.
Sens. Warren and Hawley are working together to strengthen export controls, with a focus on AI.
The UK is reportedly proposing a tech trade deal to the US, which Politico says contains “much less worry about safety, and much more concern about security and tech dominance”.
The proposal reportedly “avoids mention of thorny issues like tariffs and regulation”.
UK ministers’ usage of ChatGPT is apparently subject to FOI requests.
Influence
Mark Zuckerberg was reportedly at the White House again on Wednesday.
A coalition of tech groups, including the Software & Information Industry Association, TechNet, Americans for Responsible Innovation and the Center for AI Policy, warned Howard Lutnick against downsizing NIST, arguing instead for an approach that “aligns NIST’s world-leading expertise in standards and R&D with security and economic imperatives”.
The Chamber of Progress criticized New York's RAISE Act, claiming that it would “effectively crown the existing tech giants as the winners of the AI race while small model developers get tied up in red tape”.
Scott Wiener’s SB 53 got endorsement from Dean Ball and a16z’s Martin Casado.
Silicon Valley Leadership Group CEO Ahmad Thomas wants more data centers built in California.
Industry
Manus, a new agentic AI product from Chinese company Monica, was launched to much fanfare.
A bunch of people said this was a “second DeepSeek moment” and that it signalled we’d have AGI by the end of the year. Both of those are, obviously, nonsense.
It seems to be a slightly better version of OpenAI’s Operator — impressive, but nothing mindblowing, especially given that it uses Claude as one of its underlying base models. It tells us a lot about how keen people are to hype up Chinese AI products, though.
Manus also announced a strategic partnership with Alibaba.
DeepSeek parent company High-Flyer has reportedly forbidden some of its employees from travelling abroad, asking them to hand in their passports.
DeepSeek CEO Liang Wenfeng, meanwhile, is reportedly more concerned with research than revenue, and is not raising money.
CoreWeave signed a five-year, $11.9b contract with OpenAI.
Google reportedly owns 14% of Anthropic, can never have a stake bigger than 15%, and plans to invest an additional $750m in September through convertible debt.
Anthropic’s annualized revenue reportedly grew to $1.4b this month.
Meta is reportedly testing its first in-house AI training chip.
TSMC reportedly pitched Nvidia, AMD, Broadcom, and Qualcomm about taking stakes in a joint venture to operate Intel's fabs, which would be run by TSMC.
Amazon, Google, and Meta joined a pledge to support tripling global nuclear capacity by 2050.
There were also a bunch of releases from Google.
OpenAI launched new tools to help developers build AI agents.
Alibaba released an open-source AI model which it says can read emotions from video.
SoftBank bought a $676m former Sharp plant in Japan, which it’s converting into an AI data center for its partnership with OpenAI.
Cursor developer Anysphere is reportedly in talks to raise at a valuation close to $10b.
ServiceNow bought AI assistant company Moveworks for $2.85b.
Celestial AI, which makes a silicon photonics device to speed up data transfer in AI data centers, raised $250m at a $2.5b+ valuation.
Norm Ai, which develops AI agents for regulatory compliance automation, raised $48m.
Moves
Lip-Bu Tan is Intel’s new CEO. Notably, he was an early investor and board member in SMIC.
Google DeepMind disbanded its "product impact unit" and set up a new “applied AI” team.
The Information Technology Industry Council is launching a new state advocacy program, to be run by Nathan Trail.
Robert Boykin is TechNet’s new executive director for California and the Southwest.
Mitchell Kominsky joined Americans for Responsible Innovation as vice president of government affairs.
Karen Kornbluh, formerly director of the National AI Office, is joining the Center for Democracy & Technology as a visiting fellow.
Matija Franklin is joining Google DeepMind.
Brian Chau — well-known to Transformer readers for his history of racist and sexist comments — stepped down as Executive Director of Alliance for the Future, claiming he’s “tired of winning”.
Eric Schmidt became CEO of rocket startup Relativity Space.
Best of the rest
New OpenAI research found that you can detect model reward hacking by looking at its chain-of-thought, but that “directly optimizing the CoT to adhere to specific criteria (e.g. to not think about reward hacking) … can cause a model to hide its intent”.
It recommends “against applying strong optimization pressure directly to the CoTs of frontier reasoning models, leaving CoTs unrestricted for monitoring”.
The 2022 “The Alignment Problem from a Deep Learning Perspective” paper has been updated to include empirical evidence of concerning capabilities we’ve seen in AI models since then.
Anthropic published some new research on “auditing language models for hidden objectives”.
In a new analysis of DeepSeek, RAND’s Carter C. Price and Brien Alkire argue that “US policies that constrain China's access to chips for training pushed Chinese firms to focus on optimizing performance”.
RAND’s Lennart Heim, meanwhile, has some great analysis of what Huawei’s new Ascend 910C means.
A poll by Americans for Responsible Innovation found that 76% of voters worry about foreign actors using AI against the US.
Dan Hendrycks responded to my criticism of his paper from last week.
Anton Leicht has a good new piece arguing that AI’s “securitization trend is not fully justified, and the resulting drift is regrettable”.
Miles Brundage thinks some people are too pessimistic about the chances of improving AI security to the necessary levels.
A group of AI researchers proposed a new way to handle third-party disclosure of model vulnerabilities.
Justin Bullock, Samuel Hammond and Seb Krier warned that AGI could disrupt the “delicate balance between state capacity and individual liberty”.
FLI’s Anthony Aguirre argued for limiting AI development to "Tool AI" rather than AGI.
OpenAI is training a new model to be better at creative writing; a sample from it got very mixed receptions — lots of criticism, but praise from at least one author.
Thanks for reading — have a great weekend, and see you next week.