Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
On Transformer: A draft of new AI export controls was leaked by Inside AI Policy. You’ve already received my roundup in your inbox, but here’s a very quick summary.
The draft rule is focused squarely on the frontier: it only targets the most advanced models around, and its thresholds will rise each year so they stay at the frontier.
The draft creates a new three-tier system and strictly controls Tier 2 countries’ access to chips and model weights.
The rule imposes a bunch of restrictions designed to force companies to improve their data centre security, with some pretty stringent requirements (such as making sure facilities meet FedRAMP High standards).
The main criticism of the rule from the tech industry — that it will make other countries use Chinese chips instead of American ones — doesn’t really make sense.
If the rule is going to come (it might not!), it’ll be in the next few days: Biden doesn’t have long left in office, after all. To be fully prepared you should, of course, read my full piece here.
California Senator Scott Wiener introduced SB 53, a placeholder bill which “would declare the intent of the Legislature to enact legislation that would establish safeguards for the development of AI frontier models”.
The bill would also “build state capacity for the use of AI” and “may include, but is not limited to, the findings of the Joint California Policy Working Group on AI Frontier Models established by the Governor”, citing the task force that Gavin Newsom set up when he killed SB 1047.
It’s confirmation that Wiener is gearing up for another round of trying to regulate AI, despite last year’s defeat.
Meanwhile, New York assembly member Alex Bores announced the Responsible AI Safety and Education Act.
MIT Tech Review reports that the bill would require AI companies to develop safety plans and provide whistleblower protections.
It defines “critical harm” similarly to SB 1047, focusing on CBRN risks and billion-dollar levels of damage.
Unlike 1047, the new bill would not create a new government body, nor would it require a shutdown provision.
And in other state-level news, Connecticut senators said AI legislation was a top 2025 priority.
Sam Altman kicked off the new year with a bunch of new info and big pronouncements:
In a blog post, he said “we are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”
“We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word.”
Meanwhile, in a Bloomberg interview he gave his perspective on the 2023 board fight:
“I had had issues with various board members on what I viewed as conflicts or otherwise problematic behavior, and they were not happy with the way that I tried to get them off the board.”
“I don’t think I was doing things that were sneaky. I think the most I would say is, in the spirit of moving really fast, the board did not understand the full picture.”
And this bit is long, but worth quoting in full:
“I think the previous board was genuine in their level of conviction and concern about AGI going wrong. There’s a thing that one of those board members said to the team here during that weekend that people kind of make fun of her for, which is it could be consistent with the mission of the nonprofit board to destroy the company. And I view that—that’s what courage of convictions actually looks like. I think she meant that genuinely. And although I totally disagree with all specific conclusions and actions, I respect conviction like that, and I think the old board was acting out of misplaced but genuine conviction in what they believed was right. And maybe also that, like, AGI was right around the corner and we weren’t being responsible with it. So I can hold respect for that while totally disagreeing with the details of everything else.”
(He then goes on to accuse an “old board member” of “leaking fake news to the press”, which, as Zvi Mowshowitz points out, is a bit rich given how much fake news Altman and his allies leaked to the press during that period.)
In other Altman news: Sam’s sister, Annie, sued her brother, accusing him of regular sexual abuse when she was a child. Sam and the rest of his family strongly denied the claims.
The discourse
Dean Ball thinks the proposed Texas Responsible AI Governance Act is very bad:
“TRAIGA, despite its improvements, remains a Lovecraftian regulatory nightmare.”
Jason Green-Lowe is not impressed with the House AI Task Force report:
“It falls short of addressing the most crucial challenge of our time: preventing catastrophic risks from advanced artificial intelligence.”
Niklas Zennström thinks European AI companies should focus on developing applications, not models:
“It’s not like everyone needs to be a large language model . . . You can create value as an application provider.”
Garrison Lovely said AI progress is increasingly illegible — but that doesn’t mean it’s not happening.
“The big risk is that policymakers and the public tune out this progress because they can't see the improvements first-hand.”
Policy
Donald Trump announced a $20b investment in US data centres, funded by Emirati billionaire Hussain Sajwani.
Britain plans to criminalise creating and sharing sexually explicit deepfakes without consent.
Influence
Brad Smith published a long essay about how Microsoft sees a “golden opportunity for American AI”.
The R Street Institute made a bunch of seemingly sensible AI policy suggestions, arguing that government “should incorporate risk-tolerance principles -- which involve defining acceptable risks -- into any AI regulation and governance solutions”.
Industry
Anthropic is reportedly raising $2b at a $60b valuation.
OpenAI is losing money on the $200/month ChatGPT Pro plan, Sam Altman said.
Elon Musk's lawsuit against OpenAI took a new turn, with Musk asking the California and Delaware attorneys general to force an auction of the OpenAI non-profit’s stake in the for-profit entity.
Mark Zuckerberg allegedly personally approved using pirated e-books to train Meta’s AI models, according to a new lawsuit.
Nvidia announced a bunch of stuff at CES: new Nemotron models focused on AI agents, a desktop “supercomputer” for running AI models locally, and new “world models” that will be useful for robotics.
Bloomberg has a good piece on the recent flurry of interest in world models.
Microsoft announced a $3b investment in Indian data centre capacity over the next two years.
It also released its Phi-4 model as fully open-source on Hugging Face, with downloadable weights and an MIT license for commercial use.
And it rolled back Bing Image Creator after users complained that a new version of DALL-E 3 created worse images.
AWS said it plans to invest at least $11b to expand data centre infrastructure in Georgia.
xAI launched a Grok iOS app.
Eric Schmidt is reportedly working on an AI video generation startup.
Arm is reportedly considering acquiring Ampere.
AMD invested $20m in AI drug-discovery company Absci.
Healthcare startup Hippocratic AI raised $141m at a $1.64b valuation.
Moves
Dana White, John Elkann and Charlie Songhurst joined Meta’s board.
Josh Arnold joined Andreessen Horowitz as government affairs partner, leading Senate Republican engagement.
Liz O'Bagy joined TechNet as director of federal policy and AI policy lead.
OpenAI, which already has 12 people working on policy in DC, is tripling the size of that team.
ARC Prize is becoming a non-profit foundation, with Greg Kamradt as president.
Best of the rest
The soldier suspected of blowing up a Cybertruck in Vegas last week used ChatGPT to help plan the attack, police said, apparently asking the model for information on how to build an explosive.
OpenAI said “ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities”.
Apple’s new notification summarisation tool keeps producing inaccurate summaries of news stories.
Cybersecurity company Goldilock said that agentic malware might become a pressing concern in two years.
AI data centres are reportedly causing power distortions in nearby homes.
Thanks for reading — have a great weekend.