Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
At a Washington Post event, Peter Kyle elaborated on the UK’s plans for AI regulation.
He emphasised that the AI bill will be a “very focused piece of legislation on frontier AI”. The main thing it’ll do, he said, is give government the power to compel companies to release models to AISI for pre-release testing.
“I don't think that there is, at the moment, the need for that compulsion, but I don't know what the future says … So there's lots of reasons why having a statutory footing to it is the most important part of it,” he said.
Speaking of AISI, Kyle understands its key strength:
“I would say that its greatest accomplishment is actually … building great relationships with the companies themselves, and in certain areas, I think the frontier AI companies would describe the relationship they have as akin to much more of a partnership than one of … a standard regulator.”
OpenAI’s internal projections forecast training compute costs growing to $9.5b by 2026 (compared to $3b this year), according to The Information.
Losses are projected to grow accordingly, hitting $14b by 2026.
The company thinks it can be profitable by 2029, though.
It reportedly burned $340m in cash in the first half of this year, but its net losses are higher because of non-cash expenses like stock compensation and computing costs.
Including those, it lost around $3b in H1 2024.
For 2024 overall, it’s expecting $4b in revenue. Microsoft gets $700m of that, and OpenAI expects to spend $3b on training compute and $2b on inference compute.
Add on other costs, and it’s expecting $5b in losses for the year.
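A rough back-of-the-envelope check on how those figures hang together, treating “other costs” as the unknown (the residual below is my inference from the reported numbers, not a figure The Information gives):

$$
\underbrace{\$0.7\text{b}}_{\text{Microsoft's share}} + \underbrace{\$3\text{b}+\$2\text{b}}_{\text{training + inference}} + \underbrace{x}_{\text{other costs}} - \underbrace{\$4\text{b}}_{\text{revenue}} = \$5\text{b loss} \;\Rightarrow\; x \approx \$3.3\text{b}
$$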
Two of this week’s Nobel Prizes went to AI researchers: Geoffrey Hinton and John Hopfield won the physics prize, while Demis Hassabis, John Jumper and David Baker bagged the chemistry Nobel.
It’s a big win for a bunch of groups, too:
The AI field as a whole: as Shirin Ghaffary writes, “the back-to-back Nobel wins this week offer a moment of validation”.
Google, where Hassabis and Jumper currently work and Hinton previously worked.
Open Philanthropy, a long-time funder of Baker’s work.
And the UK — not that you’d know it from government comms.
Hinton used much of the Nobel media circuit to highlight the dangers of AI, especially the existential risks.
That was less true for Hassabis, though he “agreed with Hinton that more people should be researching AI safety, and spoke encouragingly of the international AI safety institutes that have started testing AI models for harmful risks”.
The funniest moment, though, was when Hinton commented on former student Ilya Sutskever: “I’m particularly proud of the fact that one of my students fired Sam Altman”.
The discourse
Elon Musk told Tucker Carlson that he wants Trump to regulate AI:
“I would certainly push for having some kind of regulatory body that at least has insight into what these companies are doing and can ring the alarm bell.”
Ylli Bajraktari said the next president might have to deal with AGI — and they need to make sure China doesn’t get it:
“Amidst the colossal domestic and international challenges that will confront the next U.S. president—Ukraine, Taiwan, the Middle East, the economy, and a politically charged environment—ensuring that the United States wins the AGI race will be seen, retrospectively, as the most important.”
Kevin Roose thinks the tech industry might regret killing SB 1047:
“I think there may be a point where the AI industry wishes that what it had gotten, instead of this patchwork of little use-based regulations, was one or a handful of big, broad regulations that apply to only the companies that have the most money.”
OpenAI does not like Elon Musk’s lawsuit:
“The suit is the latest move in Elon Musk’s increasingly blusterous campaign to harass OpenAI for his own competitive advantage.”
Policy
The DOJ is considering forcing Google to break up. DeepMind doesn’t seem to be a focus of the discussions, though AI search features are discussed a bunch in the filing.
Reps. Obernolte and Lieu introduced a bill to encourage AI R&D through NSF-run “grand challenges”. It’s the counterpart to a Senate bill from earlier this year.
Sens. Markey and Hickenlooper said they’d work on building bipartisan support for assessing AI's environmental impacts.
The UK is using a Palantir AI model to draw up its Strategic Defence Review.
The UK also launched a Regulatory Innovation Office to accelerate approval of AI (and other tech) in sectors like healthcare and space.
Japan's AI Safety Institute released a guide on red teaming methodologies.
Influence
Ben Horowitz said he’s making a “significant” personal donation to Kamala Harris, but that a16z doesn’t yet support her.
The New Yorker published a great profile of Chris Lehane, OpenAI’s new lobbying chief, and his long history of deceptive campaigns.
Nvidia held a conference in DC all about how great and useful its technology is, particularly for government agencies and academia.
Communities opposed to data centres being built in their neighbourhoods are developing “a collective playbook for obstructing the data centre gold rush”, sharing information to strengthen their campaigns.
OpenAI’s hiring a “head of internet creators” to improve its relationships with influencers.
Industry
It seems OpenAI’s going to become a public benefit corporation, similar to Anthropic and xAI.
The FT reports that the non-profit entity would continue to exist, and “would have access to research and technology but solely focus on pursuing OpenAI’s mission of benefiting humanity”. It probably wouldn’t be run by Altman, the FT says.
OpenAI is reportedly seeking more control over its compute, and thinks Microsoft isn’t moving quickly enough to build data centres.
Wired has a good piece on the exodus of researchers from OpenAI, seemingly driven by a shift to a more commercial focus.
OpenAI reported an unsuccessful phishing attempt on its employees by a seemingly China-linked group.
It also said it’s disrupted over 20 operations that tried to use its models to influence elections.
Meta AI expanded to six new countries, including the UK — but not the EU, in part because of GDPR.
Meta announced Movie Gen, a new video-generating AI model. It’s not releasing it yet, though.
AMD launched the Instinct MI325X AI chip, which it’s positioning to compete with Nvidia’s Blackwell GPUs. It will go into production by the end of the year.
AMD plans to produce high-performance chips at TSMC's Arizona facility starting next year, according to Tim Culpan.
And Amkor and TSMC have agreed to collaborate on packaging and testing — including CoWoS — at that fab.
TensorWave launched an AMD-powered cloud for AI training and inference.
Fei-Fei Li’s World Labs will use Google Cloud as its primary compute provider.
Lots of fundraising news this week:
Bret Taylor’s AI startup Sierra is reportedly raising at a $4b+ valuation.
Abridge, which offers an AI tool to transcribe doctor-patient conversations, is reportedly raising $250m at a $2.5b valuation.
Writer is reportedly raising $200m at a $1.9b valuation. Its new model only cost $700k to train.
EvenUp, an AI startup for personal injury law, raised $135m at a $1b+ valuation.
Auger, an AI-powered supply chain management startup, raised $100m.
Suki raised $70m at a reported $500m valuation to develop AI voice assistants for hospitals.
Basecamp Research raised $60m to build an AI agent for biology.
Braintrust, the Altman-backed AI eval startup, raised $36m at a reported $150m valuation.
Numeric, an AI-powered accounting software company, raised $28m.
Moves
NIST Director Laurie Locascio is leaving. She’ll become president and CEO of the American National Standards Institute.
OpenAI appointed Oliver Jay as Managing Director for International.
It announced new offices in NYC, Seattle, Paris, Brussels, and Singapore (which will be its APAC hub).
The lobbying-focused Brussels office will be staffed by Olga Nowicka (ex-Workday), Jakob Kucharczyk (ex-Meta) and Rafaela Nicolazzi (ex-Google).
Liam Fedus now leads post-training at OpenAI, replacing Barret Zoph.
Matt Brittin, Google's EMEA president, stepped down.
Matt Wood, AWS’s VP of AI products, left Amazon.
Peter J Liu left Google DeepMind to “work on something new”.
He said “recent competitive trends and a paradigm shift in scaling compute is drying up the moat around traditional foundation model companies”.
Nin Pandit, who worked on the AI Safety Summit and AISI, is Keir Starmer’s new principal private secretary.
Anthropic is factoring Claude’s productivity-boosting features into its thinking about how many developers to hire in future.
Best of the rest
OpenAI launched MLE-bench, a new benchmark to assess AI systems’ performance on machine learning engineering tasks. o1 achieves at least a bronze medal in 17% of tasks.
A group of Chinese researchers released a very fast, open-source video generation model.
MITRE and other organisations launched an AI incident-sharing initiative.
RAND and METR launched “Project Canary”, an effort to build better AI evaluations. It’s got $38m in funding from the Audacious Network.
FLI gave the Federation of American Scientists $1.5m to study “the implications of artificial intelligence on global risk”.
Wired has an in-depth profile of Jake Sullivan, focused on his China hawkishness. Chips and AI come up a lot.
The FT reported on how Chinese AI startups are launching AI products targeting the US market.
SemiAnalysis has a mega-detailed piece on how to build and operate an AI neocloud.
Epoch AI estimates that Nvidia has sold the equivalent of ~3m H100s since 2022. Microsoft seems to have the biggest chunk of them, but Google’s TPU hoard means it’s likely got more compute overall.
Lots of coverage this week on how AI is boosting demand for nuclear energy, while the WSJ warns that there are similarities to the internet-driven power bubble of the ’90s.
TSMC's power consumption is projected to double to 15.6% of Taiwan's total electricity by 2030.
OpenAI signed a partnership deal with Hearst.
A new high score (49%) was achieved in the ARC-AGI competition.
A Brookings study found that generative AI could significantly impact 30% of workers' tasks, with women in non-unionised office jobs facing the highest risk of disruption.
Researchers found Twitter removed AI-generated nude images reported as copyright violations within hours, but ignored those reported as nonconsensual nudity for weeks.
The organisers of AI scenario game Intelligence Rising wrote up what they’ve learnt from running the sessions.
Hurricane Helene has led to a spate of fake AI-generated images going viral.
I wrote about how I use AI tools to help write this newsletter.
Thanks for reading, have a great weekend.