Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
OpenAI simply cannot have a quiet week.
This week saw the sudden departure of three executives: CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph.
While all three departure statements were very pleasant, there are signs that things didn’t end amicably with Murati: she informed Sam Altman that she was leaving just a few hours before the news was made public, and she has retained her own PR advisors.
It’s previously been reported that Murati was one of the executives who complained about Altman to the OpenAI board.
We don’t really know what’s driven the departures. But recent reporting suggests a couple of possible reasons:
The Information says that burnout and “turf wars” between executives haven’t helped, with Altman reportedly avoiding making difficult decisions and saddling Murati with them.
The WSJ, meanwhile, says there’s been “chaos and infighting among executives worthy of a soap opera”.
The Journal also notes internal unrest over safety practices, particularly testing: GPT-4o’s safety testing was reportedly so rushed that persuasion capabilities exceeding OpenAI’s internal standards were only discovered after launch.
Altman, meanwhile, has reportedly been “largely detached from the day-to-day”.
More speculatively, further reporting this week strengthened the argument that OpenAI is looking to ditch its non-profit structure: Reuters reported that the “non-profit will continue to exist and own a minority stake in the for-profit company”, but will no longer have control.
Sam Altman, meanwhile, may get equity in the company. Bret Taylor said “the board has had discussions about whether it would be beneficial to the company and our mission to have Sam be compensated with equity, but no specific figures have been discussed nor have any decisions been made”.
Altman told employees that it wouldn’t be a “giant equity stake”, though, contrary to some outlets’ reporting.
Altman said the restructuring wasn’t the reason for the execs’ exit. But OpenAI’s come in for strong criticism over the rumoured changes nonetheless.
The Atlantic’s Karen Hao: “For the first time, OpenAI’s public structure and leadership are simply honest reflections of what the company has been—in effect, the will of a single person.”
Vox’s Sigal Samuel: “After years of sweet-talking the press, the public, and the policymakers in Congress, assuring all that OpenAI wants regulation and cares more about safety than about money, Altman is not even bothering to play games anymore. He’s showing everyone his true colours.”
Elon Musk: “You can’t just convert a non-profit into a for-profit. That is illegal.”
On which note: it’s a good time to revisit this legal analysis of whether OpenAI can actually change its structure.
And in other OpenAI news:
CFO Sarah Friar reassured investors that the departures wouldn’t affect the company, and said its $6.5 billion funding round will close next week.
Altman published an essay called “The Intelligence Age”, which talks about how great AI will be — and doesn’t even pay lip service to concerns about AI’s risks. He said superintelligence could arrive in “a few thousand days”.
The NYT has a deep-dive on Altman’s plan to build tons of AI data centres, with a particularly good anecdote about TSMC execs calling him a “podcasting bro”.
And Altman reportedly pitched the White House on building 5GW data centres.
The SB 1047 fight entered its final phase, and endorsements came out in force.
Over 125 Hollywood figures — including JJ Abrams, Mark Hamill, and Jane Fonda — urged Gavin Newsom to sign the bill, with Mark Ruffalo saying “it’s only sensible to put a modicum of oversight on a new technology and approach it with some basic guardrails instead of waiting for harm to be done and then do something.”
Former NYC Mayor Bill de Blasio also endorsed the bill.
Some good op-eds on the subject, too:
Jonathan Taplin: “If tech titans are able to successfully leverage their money and power to defeat SB 1047, we’re headed back to a world where they get to define ‘innovation’ as a blank check for tech companies to keep the winnings — and make the rest of us pay for what’s broken.”
Ketan Ramakrishnan: “The most common criticism of SB 1047 is that it would stifle innovation by holding AI companies liable when wrongdoers use their systems to cause harm. Critics seem to think that such liability is a radical new legal invention. They are wrong.”
Newsom has until Monday to make a decision.
The discourse
Joe Biden discussed AI safety in his final UN General Assembly speech:
“AI also brings profound risks, from deepfakes to disinformation to novel pathogens to bioweapons … As countries and companies race to uncertain frontiers, we need an equally urgent effort to ensure AI’s safety, security, and trustworthiness.”
Ivanka Trump has read Situational Awareness:
“Leopold Aschenbrenner’s SITUATIONAL AWARENESS predicts we are on course for Artificial General Intelligence (AGI) by 2027, followed by superintelligence shortly thereafter, posing transformative opportunities and risks. This is an excellent and important read.”
Aspen Strategy Group’s Anja Manuel called for mandatory safety testing of large AI systems:
“In order to reap the full benefits of AI we need immediate, concrete action to create and enforce safety standards — before models cause real world harm.”
Helen Toner thinks we need to regulate AI under uncertainty:
“We don't really have a choice other than to try and find ways of putting in place common sense tests or common sense limits around what we might see … Ideally, we would have super well-grounded, well-validated empirical science and theoretical science to guide this. Given that the technology is charging ahead without that, I think we just sort of do the best that we can.”
Cybersecurity researcher Victor Benjamin says we need to use AI to amp up cyberdefences:
“AI is enabling people with little technological know-how to become cybercriminals … Rather than cage AI, we can fight back by deploying advanced AI cybersecurity tools and updating our defensive strategies to better monitor hacking communities on the darknet.”
Vinod Khosla says he pushed for ousting the OpenAI board:
“I was very vocal that we needed to get rid of those, frankly, EA [Effective Altruism] nuts, who were really just religious bigots.”
Policy
The House Science Committee unanimously approved three AI bills: one on tracking AI incidents, another on DOE AI research, and a third on the development of small modular nuclear reactors for AI data centres.
Reps. Ross and Beyer introduced the House version of the Secure AI Act, which would expand upon the aforementioned incident tracking bill and establish an AI Security Centre.
Sens. Heinrich and Rounds introduced legislation that would use AI to improve pandemic prevention efforts.
The State Department launched the Partnership for Global Inclusivity on AI, in partnership with OpenAI, Anthropic, Google, Microsoft, and others. It has over $100m to promote AI development in developing countries.
Various agencies published their plans to comply with the White House OMB’s AI governance memo.
The DOJ is reportedly investigating Super Micro, after a former employee “accused the company of accounting violations”, according to the WSJ.
The FTC took action against five companies for “operations that use AI hype or sell AI technology that can be used in deceptive and unfair ways”.
Lots of companies, including OpenAI, Microsoft and Google, signed the EU’s voluntary AI Pact. Apple, Meta, and Anthropic did not.
MEPs involved in the AI Act are questioning the European Commission's process for appointing chairs of the code of practice for GPAI model providers.
China drafted new watermarking rules for AI-generated content.
Global AI rules are going to be a focus for Brazil’s G-20 session in November, with President Lula reportedly wanting to “craft a governance framework that includes the interests of Global South nations and forces … China and the US to the table”.
Influence
Speaker Mike Johnson said he’s had conversations with AI companies that are “genuinely concerned that there'll be an overreach by the federal government”, leading him to be wary of any “red tape” or AI regulations.
Yoshua Bengio, Alondra Nelson, and 19 other AI experts signed the Manhattan Declaration, outlining principles for AI development and governance.
It calls for more work on AI alignment, global scientific cooperation, and more research into AI capabilities and risks.
A separate group of AI researchers (including Fei-Fei Li and Meta’s Joelle Pineau) called for “evidence-based AI policy”.
Notably, they endorsed “if-then” commitments, saying that we need a “blueprint for AI policy that maps different conditions that may arise in society (e.g. specific model capabilities, specific demonstrated harms) to candidate policy responses”.
Markus Anderljung put together some sensible suggestions on what UK AI regulation should look like, saying new rules should focus on “telling companies what they need to do, but not mandating how they do it”.
Meta's Nick Clegg criticised the UK’s focus on AI safety, saying “we wasted a huge amount of time going down blind alleys, assuming that this technology was going to eliminate humanity”.
Industry
Anthropic is reportedly projected to hit $1b in annualised revenue later this year. It’s also reportedly fundraising at a $30-40b valuation.
Google released updated Gemini 1.5 models with improved performance and significantly reduced pricing.
Meta released Llama 3.2, its first open-source multimodal AI model with visual capabilities, and added a voice mode with celebrity voices.
It also partnered with Arm to develop Llama 3.2 1B and 3B, designed to be small enough to run on phones.
OpenAI is reportedly tweaking its Sora AI video generation model before release to make it faster and more consistent.
Microsoft launched a "correction" feature for Azure AI Studio that automatically detects and rewrites inaccurate AI outputs.
DeepMind said AlphaChip, its AI chip design tool, has been used to “design superhuman chip layouts” for TPUs.
Qualcomm has reportedly approached Intel about a potential takeover.
Apollo, meanwhile, has reportedly offered to invest up to $5b in Intel.
TSMC and Samsung have reportedly discussed building $100b worth of chip factories in the UAE.
SK Hynix announced mass production of a new 12-layer HBM3E chip.
Scale AI reportedly nearly quadrupled its revenue in the first half of this year, but has struggled with data quality and worker fraud.
Google announced a $120m Global AI Opportunity Fund for worldwide AI education and training, while OpenAI launched the OpenAI Academy for AI developers and organisations in low- and middle-income countries.
Hugging Face surpassed 1m AI model listings.
The Allen Institute for AI released Molmo, a multimodal open-source AI model.
IBM's plan to replace thousands of jobs with AI is reportedly failing because its software isn’t up to the task. Laying off experienced staff just exacerbates the problem.
Indian startup Nurix AI raised $27.5m from Accel and General Catalyst to build custom AI agents for enterprise services.
Sam Hogan launched "inference.net", a “wholesaler of LLM inference tokens” that resells underutilised data centre capacity.
Moves
Microsoft AI chief Mustafa Suleyman has reportedly reshuffled leadership, getting rid of Sébastien Bubeck (who was working on distilled models). Bubeck’s going back to Microsoft Research.
James Cameron joined the board of Stability AI.
UNIDIR announced Yi Zeng as a Senior Fellow.
Lauro Langosco is joining the EU Commission's new AI safety unit, within the AI Office.
Chris MacKenzie joined Americans for Responsible Innovation as senior director of communications.
Louise Matsakis is joining Wired as a senior business editor.
Best of the rest
OpenAI agreed to allow authors suing it to inspect its AI training data.
A deepfake caller impersonating Ukraine's former foreign minister Dmytro Kuleba reportedly targeted Senate Foreign Relations Committee chair Benjamin L. Cardin.
The US said Russia, Iran, and China are using AI to boost anti-US influence campaigns ahead of the November election.
The WSJ profiled Rohit Prasad, who is tasked with fixing Amazon’s AI efforts.
Lennart Heim has a good piece on what o1 and the increasing importance of inference compute might mean for AI governance.
The UK AI Safety Institute published a bunch of learnings from developing Q&A evals.
Epoch AI estimated that power required to train frontier AI models is doubling annually, compared to a 4x per year increase in training compute.
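To make those two growth rates concrete, here’s a minimal back-of-the-envelope sketch in Python. It simply compounds the 4x-per-year compute figure and the 2x-per-year power figure from the item above (illustrative numbers, not Epoch’s actual model); the widening gap between the two has to be absorbed by some combination of better energy efficiency (FLOP per watt) and longer training runs.

```python
# Back-of-the-envelope comparison of the growth rates cited above.
# Illustrative only: the 4x and 2x annual multipliers come from the Epoch AI item.

COMPUTE_GROWTH_PER_YEAR = 4.0  # total training compute for frontier runs
POWER_GROWTH_PER_YEAR = 2.0    # power required for those runs

for year in range(1, 6):
    compute = COMPUTE_GROWTH_PER_YEAR ** year
    power = POWER_GROWTH_PER_YEAR ** year
    # The remaining factor must come from efficiency gains and/or longer runs.
    print(f"Year {year}: compute x{compute:.0f}, power x{power:.0f}, "
          f"gap to absorb x{compute / power:.0f}")
```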
A new paper critiqued the "bigger-is-better" paradigm in AI, arguing it is unsustainable and concentrates power in few hands.
A study published in Nature found that larger AI chatbots are more likely to give wrong answers than admit ignorance.
The Secret Service spent $50k on Microsoft Azure and OpenAI cloud services but won't say why.
Researchers built an AI tool which can solve 100% of Google's reCAPTCHAv2 challenges.
A big new survey found 39% of U.S. adults used generative AI last month, with 28% of employed respondents using it at work — and 11% using it daily.
Thanks for reading; have a great weekend.