No one can ever know, for sure, what Donald Trump will do — on any topic. But a variety of previous remarks from Trump and his closest advisors offer some clues about how the president will treat AI in his second term. Here’s what we know.
What he’s said
Trump's most substantive comments on AI came, bafflingly, in an interview with Logan Paul earlier this year. In that interview, he said that AI “is a superpower”, “very disconcerting”, and “alarming”.
Unsurprisingly, he emphasised the need for the US to stay ahead of China on AI: “We have to be at the forefront. It's going to happen. And if it's going to happen, we have to take the lead over China.” He said expanding America’s electricity production capacity was crucial for doing this.
Trump also acknowledged the possibility of superintelligence — which he called “super duper AI” — and of control risks, though without taking a clear position. “You know, there are those people that say it takes over the human race. It's really powerful stuff, AI. So let's see how it all works out.”
Repealing the executive order
More concretely, Trump has promised to repeal last year’s executive order on AI. “When I’m re-elected, I will cancel Biden’s artificial intelligence executive order and ban the use of AI to censor the speech of American citizens on day one,” he said last December.
The Republican National Committee's policy platform reinforced this position, describing the order as “dangerous” and “[imposing] Radical Leftwing ideas on the development of this technology”.
Given that much of US AI governance — including the AI Safety Institute — is currently based on the executive order, a repeal could have big implications. Trump’s election likely adds significant urgency to the campaign to formalise the institute in legislation by the end of the year.
Replacing the executive order
Rumours abound that rather than just repealing the order, a new Trump administration would replace it with something else.
According to the Washington Post, the America First Policy Institute has drafted a new EO, which would establish “Manhattan Projects” for military use of AI, review “unnecessary and burdensome regulations”, and create “industry-led” agencies for model evals and security.
The proposed framework's "Make America First in AI" section further indicates that US-China competition would be central to Trump's AI strategy. Even more evidence for that comes from the involvement of China hawk and Palantir advisor Jacob Helberg, who has reportedly prepped “an executive order … that would dismantle the Biden administration’s rules on artificial intelligence”. (It’s worth noting, though, that Helberg has also expressed concerns about AI-assisted terrorism and biowarfare.)
The anti-regulation coalition
Helberg isn’t the only tech figure in Trump’s orbit. JD Vance, America’s next vice-president, has previously said he’s worried that pushes for AI regulation are a form of regulatory capture, and has loudly supported open-source AI. Vance is a protégé of Peter Thiel, who, while once worried about AI risks, more recently seems to have completely changed his mind (and is now much more worried about China winning the AI race).
Other VCs in Trump’s circles share these ideas — most notably Marc Andreessen, who explicitly said he was backing Trump because of his stance on AI. For most of the election, this anti-regulation group of investors were the loudest voices on AI in Trump’s orbit. Until…
The Elon factor
In the past couple of months, Elon Musk has catapulted himself into Trump’s inner circle. Given that he runs an AI company and co-founded OpenAI, it’s likely that he’s now one of Trump’s closest advisors on AI policy (and Trump said back in June that Musk advises him on AI, at least).
Musk was one of the first people to take the possibility of catastrophic AI risk seriously, and has talked at length about the threat AI poses to humanity. Just last week, he said AI is a “significant existential threat”, and that there’s a 10% to 20% chance it “goes bad”.
Musk is also supportive of AI regulation. He recently told Tucker Carlson that he wants Trump to regulate AI, saying that he “would certainly push for having some kind of regulatory body that at least has insight into what these companies are doing and can ring the alarm bell”. He also supported SB 1047, noting at the time that he has “been an advocate for AI regulation” for “over 20 years”. Dan Hendrycks, one of the people closely involved with the bill, works with Musk as a safety advisor at xAI.
And Musk’s not the only one in Trump’s orbit supportive of regulation. Samuel Hammond, a participant on Project 2025’s AI Policy Committee, has said that “Trump’s supposed shadow transition takes AGI and its associated risks seriously”. (Of course, the influence Project 2025 will actually have on Trump’s administration is completely unknown.)
Situationally aware
Perhaps the weirdest tea leaf to read is a September tweet from Trump’s daughter, Ivanka, in which she said she had read Situational Awareness — Leopold Aschenbrenner’s attempt to AGI-pill DC policymakers.
Ivanka called the essay an “excellent and important read”, noting that it “predicts we are on course for [AGI] by 2027, followed by superintelligence shortly thereafter, posing transformative opportunities and risks”.
The essay pushes several ideas on readers: that AGI is imminent, that it will be utterly transformative, that it is imperative the US build it before China does, and that the US should probably put significant state resources into building AGI to ensure it wins. If Ivanka and her father have taken its conclusions on board, that adds even more weight to the theory that Trump’s AI policy will be laser-focused on beating China — despite all the risks that come with such a race dynamic.
Trump’s track record
Despite the fun of reading way too much into individual tweets, it would be remiss to predict Trump’s future actions on AI without also looking at what he did last time he was in office. And as recent articles in The Atlantic and Bloomberg Law have laid out, AI policy in Trump’s first term was actually remarkably similar to Biden’s. As The Atlantic notes, “Biden’s ‘dangerous’ executive order echoes not one but two executive orders on AI that Trump himself signed.”
That said, it’s unclear how much to read into Trump’s past actions on AI. Those came at a very different time: one where AI was not nearly as politically salient, and before today’s battle lines were drawn. Trump’s past actions may therefore not reveal much about what he’ll do next.
A change of personnel
One thing we do know is that many of the biggest advocates for AI regulation will be leaving the White House. It’s thought that many of the Biden administration’s actions on AI safety have been down to the concerns of a few highly motivated people: folks like Ben Buchanan and Bruce Reed.
With their departure, it’s not clear that similarly motivated people will replace them.
The only certainty is uncertainty
While themes of US-China competition and a lighter-touch approach to regulation recur, the reality is that, as with all things Trump, it’s very hard to say what he’ll do next.
Perhaps Musk will steer him towards impactful AI regulation — or perhaps the two will have a huge falling out. Perhaps JD Vance will take the lead on AI policy — or perhaps he’ll be sidelined, just as Mike Pence was. As with so many things in AI, the only certainty is uncertainty.