Transformer Weekly — Nov 8
Schumer framework’s “dead” | Cruz will chair Senate Commerce | Kratsios is back
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
Republicans won the presidency and flipped the Senate. The House looks like it’s staying red. So what does that mean for AI?
On Transformer, I looked at everything we know about Trump’s stance on AI safety.
The TL;DR is that he’ll likely repeal Biden’s executive order and replace it with something else (though we don’t know what that something will look like).
Some of his advisors are staunchly opposed to AI regulation, while others (notably Elon) are very worried about AI risks and think governments need to act.
Basically, it’s all very uncertain. Read the full piece here.
One correction on that piece: I said “much of US AI governance — including the AI Safety Institute — is currently based on the executive order”. But as several helpful readers pointed out, AISI isn’t actually mentioned in the EO.
That said, the EO does direct NIST to do work on things like AI safety standards and evals, which is work that’s being done by AISI. So it does seem like repealing it would have a pretty big impact on AISI’s work — although Trump’s new EO could of course give NIST and AISI many of the same tasks (though it’ll likely be a different mix of things).
(As always, I’d love to get your thoughts and opinions: just reply to this email!)
Since I published that piece on Wednesday, we’ve learnt some new info:
Michael Kratsios and Gail Slater are reportedly handling tech policy for the Trump transition team.
Kratsios was CTO in Trump’s last administration, and currently works as an MD at Scale AI.
Slater is JD Vance’s economic policy advisor.
And according to Politico, favourites for Commerce Secretary include Robert Lighthizer, Linda McMahon, and Bill Hagerty.
On the Congressional side:
Politico reports that the Schumer AI framework is “likely dead”.
One of the favourites to become the new Senate Majority Leader is Sen. John Thune — who notably has a bipartisan AI bill of his own.
Sen. Ted Cruz — who has been very critical of AI regulation — is set to chair the Senate Commerce Committee.
Tech CEOs, including Sam Altman, jostled to congratulate Trump.
Notably, Anthropic did not offer congratulations: Jack Clark simply said “Anthropic will work with the Trump Administration”.
The discourse
In the wake of the “China’s using Llama” report, open-weight AI advocates have been out in force:
Dean Ball and Keegan McBride:
“Rather than focusing on the risks of open source AI, policymakers should ask whether the world should rely on US-developed AI – or the increasingly capable open source models from China.”
The Economist’s leader:
“True, open-source models can be abused, like any other tech. But such thinking puts too much weight on the dangers of open-source AI and too little on the benefits. The information needed to build a bioweapon already exists on the internet and, as Mark Zuckerberg argues, open-source AI done right should help defenders more than attackers.”
(Citation needed!)
Ben Horowitz claimed that AI progress is slowing down:
“We've really slowed down in terms of the amount of improvement... we're increasing GPUs, but we're not getting the intelligence improvements, at all.”
(Citation needed!)
Miles Brundage has been blogging like crazy, and has come up with a new metaphor:
“There is too little safety and security ‘butter’ spread over too much AI development/deployment ‘bread’”.
Policy
Applied Materials and Lam Research reportedly directed suppliers to find alternatives to Chinese components and ditch Chinese investors.
TSMC and GlobalFoundries reportedly completed negotiations for billions in CHIPS Act grants and loans.
The Pentagon awarded its first “generative AI defence contract” to Jericho Security.
State legislatures introduced 693 AI-related bills across 45 states this year, up from 191 last year. 113 were enacted.
The UK signed an AI safety agreement with Singapore.
Peter Kyle pledged to introduce AI safety legislation within a year.
The UK released a report on the AI assurance market, arguing it could grow to £6.5b by 2035.
It also released a new platform “giving British businesses access to a one-stop-shop for information on the actions they can take to identify and mitigate the potential risks and harms posed by AI”.
Denmark launched its national AI supercomputer, which has 1,528 H100s and is financed by Novo Nordisk (of Ozempic fame).
Influence
Microsoft and Andreessen Horowitz issued a joint memo on how AI policy can help startups. It calls for, among other things:
“a science and standards-based approach that recognizes regulatory frameworks that focus on the application and misuse of technology”
“enabling choice and broad access”
“a regulatory framework that protects open source”.
The Mozilla Foundation shut down its advocacy division and laid off 30% of its staff.
It said that “advocacy is still a central tenet of Mozilla Foundation’s work and will be embedded in all the other functional areas”, though.
Industry
Anthropic partnered with Palantir and AWS to provide its models to US defence and intelligence agencies.
Anthropic said the agencies will use its tools for intelligence analysis, decision-making, and operational efficiency. It noted that its terms of service prevent its models being used for weapon design or deployment, among other things.
Meta also announced this week it’s making its Llama models available for national security applications — though as they’re open-weight models, Meta’s permission was never really needed.
Amazon is reportedly discussing a second multibillion-dollar investment in Anthropic, contingent on Anthropic using more Amazon Trainium chips.
T-Mobile reportedly agreed to pay OpenAI $100m over three years. It’ll use OpenAI’s models to develop a customer service chatbot.
OpenAI reportedly entered discussions with the California AG’s office about converting to a for-profit.
Read Transformer’s recent piece for more on this conversion, and the fears that OpenAI will rip its non-profit arm off.
Nvidia is snapping up manufacturing capacity: it’s reportedly reserved half of TSMC’s CoWoS capacity for 2025; and its orders with Foxconn might stop Apple from being able to make as many AI servers as it wants.
SK Hynix announced new AI memory chips.
TSMC said it’s paying more for electricity in Taiwan than anywhere else.
Huawei is reportedly trying to poach TSMC engineers by tripling their salaries.
AMD’s datacenter revenue surpassed Intel’s for the first time. Both trail far behind Nvidia.
Microsoft reportedly plans to spend $10b with CoreWeave through 2030.
The FT has a piece on the $11b neocloud debt market.
Perplexity is reportedly raising $500m at a $9b valuation, led by Institutional Venture Partners.
It seems like it handled election news well, too.
Anysphere, which makes the Cursor AI coding assistant, is reportedly raising at a $2.5b valuation.
Mercor, which OpenAI and other AI companies use to find contractors, is reportedly raising at a $2b valuation.
Physical Intelligence raised $400m from investors including Jeff Bezos and OpenAI.
Super Micro stock continues to plunge, and it might be delisted from the Nasdaq.
Moves
Caitlin Kalinowski, who led the AR glasses hardware team at Meta, joined OpenAI to lead robotics and consumer hardware.
Gabor Cselle is joining OpenAI.
Alex Rodrigues joined Anthropic as an alignment researcher.
Will Grathwohl is leaving Google DeepMind.
The Information’s Kate Clark is joining Bloomberg.
The International Association for Safe and Ethical Artificial Intelligence launched. Stuart Russell is the president, Mark Nitzberg the interim executive director, and lots of excellent names are on the steering committee.
It’s hosting its first conference in Paris, just ahead of the AI Action Summit.
Apollo Research is hiring an EU policy researcher.
Concordia AI is hiring for several roles in Beijing and Singapore.
Best of the rest
On Transformer: Energy constraints and burdensome regulations are increasingly pushing AI companies to build data centres in the Middle East. But that could cause big problems back home.
A judge dismissed Raw Story and AlterNet’s copyright lawsuit against OpenAI.
She said “the likelihood that ChatGPT would output plagiarised content from one of Plaintiffs’ articles seems remote”. The plaintiffs are allowed to refile.
Epoch released two interesting papers.
A Google DeepMind AI agent “discovered its first real-world security vulnerability”.
Anthropic published some “sketches” of safety cases.
Someone managed to jailbreak o1-mini to get it to show its chain-of-thought reasoning.
Thanks for reading; see you next week.