Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Housekeeping
The weekly briefing will be off next week; it will be back the following week.
This week Tarbell launched Tarbell Grants: grants of $1,000-$15,000 for impactful AI journalism. You can learn more and apply on our website.
Top stories
Dario Amodei published a lengthy essay outlining what a good world with transformative AI could look like.
It’s worth reading in full, examining how AI could lead to big advances in medicine, economic development, and global governance — though it takes a notably more grounded approach than other scenarios have. A few highlights:
“I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.”
“You might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments … Intelligence may be very powerful, but it isn’t magic fairy dust.”
“It seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world.”
“My current guess at the best way to do this is via an “entente strategy”, in which a coalition of democracies seeks to gain a clear advantage … on powerful AI … This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.”
(See Max Tegmark’s critical response to this section.)
Anthropic is currently in fundraising mode: in that light, the essay could be seen as an elaborate sales pitch.
But I think there’s more to it than that. This, as with Sam Altman’s recent piece and Leopold Aschenbrenner’s Situational Awareness, is an attempt to get people to start seriously engaging with the question of what transformative AI might mean for the world.
Amodei, Altman and others genuinely believe that we are on the cusp of a world-changing invention. I get the impression that they think we are not doing nearly enough to prepare for that possibility, and that they’d really like to not be the only ones thinking about this.
Helen Toner recently pointed out that there’s a huge disconnect between the policy world, which treats “AGI” as science fiction, and the people who work at AI labs, who think it’s something we might build in the very near future. That disconnect is causing real problems, particularly when it comes to the speed of policy change, and runs the risk of society being blindsided if AGI does arrive imminently.
In that light, Amodei’s essay might just be one of many attempts to bridge the divide. It likely won’t be the last.
Meanwhile, there were a couple of stories this week on high-profile partnerships beginning to fray:
Microsoft and OpenAI are reportedly squabbling.
Mustafa Suleyman reportedly “yelled at an OpenAI employee during a recent video call because he thought the start-up was not delivering new technology to Microsoft as quickly as it should”, while other OpenAI employees “took umbrage after Microsoft’s engineers downloaded important OpenAI software without following the protocols the two companies had agreed on”.
The two are currently also negotiating what Microsoft’s equity stake in the new OpenAI PBC should be.
TSMC and Nvidia are also arguing, The Information reported, with the companies blaming each other for Blackwell production hiccups.
Nvidia is reportedly discussing getting Samsung, instead of TSMC, to manufacture its new gaming GPUs.
The discourse
Even notable AGI-sceptic Yann LeCun thinks human-level AI could come very soon:
“Reaching human-level AI will take several years if not a decade … [though] it could take much longer than that.”
Gillian Hadfield thinks we need regulation for AI agents, urgently:
“The existential risk that keeps me up at night is what seems to be the very real and present risk that we’ll have a flood of AI agents joining our economic systems with essentially zero legal infrastructure in place.”
Mark Ruffalo and Joseph Gordon-Levitt are not happy with Gavin Newsom’s SB 1047 veto:
“Newsom insists that he doesn’t believe in a ‘wait and see’ approach. But this is doublespeak: he’s saying one thing and doing exactly the opposite.”
Tom Wheeler is worried about federal agencies’ ability to handle artificial superintelligence (his words, not mine!):
“By crippling the agencies with [AI] expertise, the [Supreme] Court has passed front-line AI decision-making to the AI companies themselves.”
On Transformer, I made the case for if-then commitments as an AI policy framework:
“If done right, if-then commitments are a neat and evidence-based way of tackling a thorny problem. They allow us to bypass debates about if or when certain capabilities will arise and jump straight to discussing what to do if they do.”
Policy
The Commerce Department is reportedly investigating whether TSMC made AI or smartphone chips for Huawei. TSMC said it complies with all export controls.
Commerce is also reportedly considering capping the number of AI chips US companies are allowed to sell to certain countries, with a focus on the Middle East.
And Commerce announced a preliminary agreement with Wolfspeed to fund construction of a silicon carbide wafer facility in North Carolina.
NIST announced $15 million in funding for ASTM to establish a “centre of excellence” to promote US participation in standards development for critical and emerging technologies, including AI.
The National AI Advisory Committee drafted a transition plan for the next president, primarily focused on helping the US benefit from AI.
The committee said it will also “make a set of recommendations related to continuing to advance AI governance through executive branch and agency action”.
The UK is reportedly going to consult on an "opt-out" model for AI content-scraping, allowing AI companies to use online content unless publishers and artists explicitly opt out.
Relatedly, at this week’s International Investment Summit, Keir Starmer chatted with Eric Schmidt about AI and advocated for “leaning in” to AI rather than over-regulating it out of fear.
The UK announced $8b worth of data centre investment, from CyrusOne, ServiceNow, CloudHQ and CoreWeave.
The UK AI Safety Institute and Gray Swan AI developed AgentHarm, a new dataset for measuring the harmfulness of AI agents.
UK AISI also launched its Systemic AI Safety grants programme.
And ARIA launched a £3.4m funding call for "Safeguarded AI".
The first closed-door workshop on the EU code of practice for general-purpose AI model providers will reportedly take place on October 23, with OpenAI, Google and Meta all invited (among others).
Virginia governor Glenn Youngkin established an AI Task Force, with members including Zach Graves, Samuel Hammond, and Tim Hwang.
The DOD and DHS have spent $700m on AI projects since ChatGPT's launch.
Human rights groups launched legal action against France's use of welfare fraud detection algorithms, claiming they’re discriminatory.
Influence
Keir Starmer's business adviser Varun Chandra reportedly held meetings on AI policy while having a stake in an investment fund that has backed AI companies.
The Alliance for Trust in AI, an opaque trade group run by Venable, asked the Bureau of Industry and Security to reduce the reporting frequency for AI developers from quarterly to annually.
Americans for Responsible Innovation gave Sens. Cantwell and Young and Reps. McCaul and Beyer its first “Responsible AI Champions” award.
The Open Source Initiative criticised Meta for calling its Llama models “open-source”. It’s releasing its own definition of open-source AI next week.
The Open Markets Institute and Mozilla warned that Big Tech companies could dominate AI development, recommending antitrust action and other policies to prevent monopolisation.
Google and IDC released a “playbook” for Chief AI Officers.
Scale AI CEO Alexandr Wang urged Congress to authorise the AI Safety Institute, among other things.
The National Association of Broadcasters criticised the FCC's proposed AI disclosure requirements for political ads.
UK DayOne proposed a comprehensive UK AI industrial strategy to boost national capabilities and compete globally, suggesting that the UK needs a £100b AI national champion by 2030.
Someone launched an AI safety version of the doomsday clock.
Industry
Anthropic updated its Responsible Scaling Policy, adding “new capability thresholds to indicate when we should upgrade our safeguards”, among other things.
It said more details on capability assessments are coming soon.
DeepMind's operating profit increased 91% to $175m last year.
Meanwhile, Isomorphic Labs, Alphabet’s drug discovery startup, saw its losses grow to over $75m.
Mira Murati is reportedly recruiting OpenAI employees for a potential new venture.
Google moved its Gemini App team to Google DeepMind.
Google signed an agreement to buy 500 MW of power from Kairos Power's small modular nuclear reactors, while AWS signed a similar $500m SMR deal with Dominion.
The Sierra Club, meanwhile, seems to have quietly reversed its anti-nuclear stance, endorsing nuclear energy as a clean power source for AI.
Crusoe confirmed a $3.4 billion deal to finance an AI data centre in Texas, reportedly for use by OpenAI via Oracle. The facility could use 1GW of power by mid-2026.
Crusoe’s reportedly raising from Founders Fund and Felicis at a $2b+ valuation.
OpenAI pledged to only use its patents defensively, though experts fear it’s just "virtue signalling".
It was a bumpy week for tech stocks: they plunged after ASML cut its 2025 sales forecast, but rebounded after TSMC reported strong AI chip demand and raised its 2024 forecast. Nvidia came close to record highs.
TSMC reportedly plans to build AI chip fabs in Europe.
Dell said it will begin shipping servers with Nvidia's new Blackwell chips next month.
Qualcomm will reportedly decide whether to try to buy Intel after the election.
Mistral AI released two models optimised for edge devices.
Prime Intellect announced a 10B parameter model trained using decentralised methods, claiming it's 10 times larger than previous decentralised efforts.
Dean Ball and Jack Clark both have interesting thoughts on what decentralised training means for AI policy.
Adobe unveiled an AI text-to-video model trained on licensed content.
Sam Altman's Worldcoin rebranded as World.
Moves
Dane Stuckey, formerly of Palantir, joined OpenAI as chief information security officer. He’ll work alongside head of security Matt Knight.
Sebastien Bubeck left Microsoft for OpenAI.
Michael Sayman is joining Meta to work on generative AI.
Olivia Igbokwe-Curry is AWS’s new director of federal affairs and federal AI policy.
Clare Barclay, Microsoft UK's CEO, was appointed to chair the UK's new Industrial Strategy Advisory Council.
Alex Hern now covers AI science and technology for The Economist.
Michael Wooldridge is the first Ashall Professor of the Foundations of Artificial Intelligence at Oxford.
Ignacio Cofone, Emma Curran, Samir Sinha and Gulzaar Barn joined Oxford’s Institute for Ethics in AI.
Several OpenAI researchers reportedly requested team transfers after Liam Fedus was promoted to lead post-training.
OpenAI is hiring research scientists and engineers for various AI safety roles.
Princeton is recruiting fellows for AI placements in US government agencies.
The Alan Turing Institute reportedly launched a redundancy consultation process which could affect up to 140 of its 440 staff.
Best of the rest
Apple researchers found that LLMs struggle with basic maths problems when irrelevant information is added, arguing that this suggests they lack true reasoning capabilities.
The New York Times demanded that Perplexity stop using its content.
SAG-AFTRA and video game companies resumed negotiations over AI use in voice acting.
The Washington Post profiled OpenAI's Ben Nimmo, who works on combating AI-enhanced election interference.
Researchers developed an attack that can make LLMs secretly extract users' personal information and send it to hackers.
Dean Ball and Daniel Kokotajlo might disagree on a lot, but in a joint op-ed they agreed on the importance of transparency in AI development.
Keith Dear explored what the UK Defence Review might look like if it took the possibility of AGI by 2050 seriously.
SemiAnalysis published the first piece in a series on data centre anatomy, focused on electrical systems.
Moody's released two reports examining how AI-driven energy demand will impact energy producers and natural gas transporters.
A new DeepMind study found that AI-mediated deliberation helped groups find common ground.
Nature has a piece on the potential impacts of AI-generated protein designs.
OpenAI studied fairness in ChatGPT responses based on users' names, finding little effect.
A Wired investigation found that the Cybercheck AI tool, which helped convict people of murder, sometimes provides questionable or unverifiable evidence.
Character.AI users can easily create AI chatbots impersonating real people without consent.
Thanks for reading; have a great weekend.