Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
OpenAI raised $6.6 billion at a $157 billion valuation — the biggest VC funding round in history.
Thrive Capital reportedly led the round with a $1.3b investment. Microsoft invested $750m, SoftBank $500m, Tiger Global $350m, Coatue $250m, and Altimeter “at least $250m”. Fidelity, Nvidia, Khosla Ventures, Quiet Capital and MGX also invested.
A few interesting details from the round:
Thrive reportedly has the option to invest a further $1b at a $150b valuation until the end of next year. (Other investors are reportedly annoyed Thrive got that special term.)
OpenAI reportedly asked investors not to invest in Anthropic, xAI, SSI, Perplexity and Glean. (Notably, it doesn’t seem to care about Mistral or Cohere.)
In addition to the $6.6b in cash, OpenAI reportedly secured a $4b revolving credit line, with an option to increase it by a further $2b. The interest rate is reportedly SOFR+100bps, which works out to around 6% at current rates.
And the deal terms reportedly require OpenAI to become a for-profit business in two years — else the funding becomes debt.
That last point could prove tricky: the WSJ and Business Insider both reported recently on the legal challenges of abandoning OpenAI’s current non-profit structure, which will require the for-profit entity to buy out the non-profit’s assets.
Then there are the reputational challenges: former employees and advocacy groups raised concerns about the change this week.
The NYT also reported on OpenAI’s financials:
It reportedly has about $300m in monthly revenue, and projects $3.7b in revenue this year. It forecasts $11.6b next year, and $100b by 2029.
$2.7b of this year’s revenue will come from ChatGPT, and the company reportedly plans to raise ChatGPT’s subscription price to $44 a month by 2029.
Interestingly, this suggests that at most about 27% of revenue comes from the API, compared to an estimated 85% of Anthropic’s revenue. (A rough back-of-envelope sketch is below.)
Losses, meanwhile, are expected to be $5b this year.
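For those who want to check the arithmetic, here is a minimal back-of-envelope sketch of that API-share estimate, using the reported NYT figures. It simply subtracts ChatGPT revenue from total revenue and assumes the entire remainder flows through the API, so the resulting 27% is an upper bound rather than a precise split.

```python
# Back-of-envelope estimate of OpenAI's API revenue share, from the reported figures.
# Assumption: everything that isn't ChatGPT revenue is treated as API revenue,
# so the share computed below is an upper bound.
total_revenue_2024 = 3.7   # projected 2024 revenue, $bn (reported)
chatgpt_revenue = 2.7      # projected 2024 ChatGPT revenue, $bn (reported)

non_chatgpt = total_revenue_2024 - chatgpt_revenue   # ~$1.0bn for the API and everything else
api_share_upper_bound = non_chatgpt / total_revenue_2024

print(f"Non-ChatGPT revenue: ~${non_chatgpt:.1f}bn")
print(f"Implied API share (upper bound): {api_share_upper_bound:.0%}")  # ~27%
```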
Gavin Newsom vetoed SB 1047.
My thoughts on the matter are in this piece: in short, I think the veto demonstrates that Silicon Valley and VCs still hold huge influence over the Democratic Party.
Newsom said he vetoed the bill, at least in part, because it didn’t regulate smaller models, which might be equally dangerous. As many have said, that makes no sense, and is very clearly not why he vetoed the bill.
Here are some reactions to the veto:
Nancy Pelosi: “Thank you, [Governor Newsom], for recognizing the opportunity and responsibility we all share to enable small entrepreneurs and academia – not big tech – to dominate”
OpenAI’s Miles Brundage: “Newsom’s letter says it is *bad* there’s a carveout for small models (which was intended as a proxy for small companies). Regardless of your views on the bill, CA Democrats do not seem to be trying particularly hard to coordinate + show there was some principle here.”
Scott Wiener: “I’ve never had a bill with this level of misinformation … there was a whole propaganda campaign.” Wiener specifically cited Andreessen Horowitz, Y Combinator, and Fei-Fei Li for spreading false claims, as Transformer has previously reported.
Lorena Gonzalez: “Refusing to regulate big tech and AI is going to be the Democrats next NAFTA.”
Dean Ball: “It was a sweeping bill … Governor Newsom is therefore wise to have vetoed [it]; at the end of the day, it was simply biting off more than it could chew.”
Brad Carson: “The cry for federal rulemaking on these issues is louder than ever.”
Samuel Hammond: “Instead of focusing on frontier models where the risk is greatest, Newsom wants a bill that covers *all* AI models, big and small. Opponents of SB1047 will regret not accepting the narrow approach when they had the chance.”
Martin Casado: “Had to mass unmute/unblock a bunch of EA folks just to enjoy their lamentations for a day.”
Newsom’s veto message did say that “we cannot afford to wait for a major catastrophe to occur before taking action to protect the public”, and suggested he may support more use-based AI regulation in future. (Such an approach would struggle to address the catastrophic risks 1047 was designed to prevent, though.)
He has established a committee of experts to work on developing “workable guardrails” and “an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks”.
The committee is made up of Fei-Fei Li, Tino Cuéllar, and Jennifer Tour Chayes.
Newsom did sign a bunch of other AI bills into law, most notably several tackling AI-generated CSAM and one requiring developers to disclose their training data sources.
A federal judge blocked one of California’s new election deepfake laws, though.
The discourse
Federal Reserve Governor Lisa Cook said AI might soon affect productivity:
“I anticipate an acceleration in productivity grounded in the impressive advances in AI, but substantial uncertainty attends that forecast.”
Partnership on AI CEO Rebecca Finlay praised the UK’s approach to AI regulation:
“The UK has to be commended for getting out in front of this with the UK AI Safety Summit, and really catalysing this … it has been very, very heartening to see that the policy community is starting to understand that they need to attend to this.”
French AI Summit boss Anne Bouverot has a rather different perspective:
“The global discourse on AI has already changed … We hear much less about the existential risks of AI, or so-called high risks. We hear about a potential bubble. We hear about what the latest developments are. That’s part of what we’re trying to do, to help change that discourse. So maybe it’s less fascinating, but it’s more concrete.”
Former OpenAI employee Carroll Wainwright is worried about Sam Altman:
“It is exceedingly clear that [Altman] is a danger to OpenAI's non-profit mission … When I left, I told executives that it was going to be very hard to recruit and retain top talent because people do not trust Sam. I stand by this.”
Susan Ariel Aaronson doesn’t like AI nationalism:
“AI nationalism may seem appropriate given the import of AI, but … AI nationalistic policies may backfire and could divide the world into AI haves and have nots.”
Policy
The EU AI Office announced chairs and vice-chairs for its working groups drawing up the GPAI Code of Practice. Here’s who was chosen, with chairs listed first.
Transparency rules: Nuria Oliver and Rishi Bommasani.
Copyright rules: Alexander Peukert and Céline Castets-Renard.
Risk identification and assessment, including evals: Matthias Samwald, Marta Ziosi, and Alexander Zacherl.
Technical risk mitigation: Yoshua Bengio, Daniel Privitera, and Nitarshan Rajkumar.
Internal risk management/governance: Marietje Schaake, Markus Anderljung, and Anka Reuel.
The US AISI asked for comment on mitigating chemical and biological risks from dual-use AI models.
NIST is launching a $100m R&D project on AI usage in semiconductor manufacturing.
The White House OMB released guidance for federal agencies on purchasing AI tools. It tells agencies to share information on AI acquisition within government, consider interoperability, and “prevent vendor lock-in”.
UK AI minister Feryal Clark backtracked on new copyright legislation for AI training, saying the government is working on "a way forward" that may not involve legislation.
Malaysia announced plans for AI regulations. Google is investing $2b in a new data centre there.
Influence
Control AI published “A Narrow Path”, a policy plan to tackle AI risks. To ban the development of artificial superintelligence for the next 20 years, it proposes three licensing regimes and “prohibitions of certain research directions”. It does not seem very tractable.
SAP CEO Christian Klein warned against over-regulating AI in Europe, calling instead for an “outcome”-based regulatory regime.
CSET published reports from workshops on how to prepare for AI agents and how to secure critical infrastructure from AI threats.
RAND warned that China's military is interested in using generative AI for expanded disinformation campaigns.
Mozilla called for developing a "public AI" ecosystem to counter market concentration and promote safety standards.
Industry
OpenAI rushed the release of o1 despite staff concerns over safety, Fortune reported.
OpenAI announced a Realtime API for voice responses, and vision fine-tuning capabilities. It also launched Canvas, a new UX for ChatGPT similar to Claude’s Artifacts.
Google is reportedly working on an o1 competitor.
Microsoft added vision and voice capabilities to Copilot.
Google released Gemini 1.5 Flash-8B, which is 50% cheaper than 1.5 Flash and has double the rate limits.
Google also introduced ads to AI Overviews, while Microsoft launched its own AI search tool.
ByteDance is reportedly planning to train a new model primarily using Huawei Ascend 910B chips. ByteDance denied the reports.
BioNTech is building an AI lab assistant on top of Llama 3.1. DeepMind is working on something similar.
Cerebras filed for an IPO, through which it hopes to raise $600m. Its SEC filing discloses that more than 80% of its revenue comes from G42.
Character.ai has given up on building its own AI models, citing training costs.
Liquid AI launched non-transformer AI models that supposedly outperform Llama 3.1-8B and Phi-3.5 3.8B.
Nvidia released its NVLM 1.0 open-source AI models, which it says rival GPT-4o on some tasks.
Black Forest Labs released FLUX1.1 [pro], a faster and higher-quality image gen model, alongside a beta API.
Pika launched version 1.5 of its AI video generation tool.
11x.ai, which makes AI sales reps, reportedly raised $50m at a $350m valuation, led by Andreessen Horowitz.
Moves
Tim Brooks, who led Sora research at OpenAI, is joining Google DeepMind “to work on video generation and world simulators”.
Durk Kingma, an OpenAI co-founder, left Google for Anthropic.
Mia Glaese is OpenAI's new head of alignment research, according to OpenAI employee ‘roon’.
This Information piece is a good roundup of who’s who at OpenAI now. No CTO will be appointed to replace Mira Murati imminently, Sam Altman reportedly told employees.
Ollie Stephenson joined FAS as associate director of AI and emerging technology policy. He previously worked in Sen. Markey's office.
Camilla de Coverly Veale is joining the Mozilla Foundation as head of UK public affairs. She used to work at the Startup Coalition.
Edward Emerson is techUK’s new head of digital regulation.
xAI has moved into OpenAI’s old office.
Best of the rest
SaferAI assessed AI companies’ risk management practices, finding that xAI’s is particularly bad — but none are great.
Atoosa Kasirzadeh identified six key measurement challenges in AI safety frameworks.
The Forecasting Research Institute launched ForecastBench, a new benchmark for evaluating AI forecasting capabilities. It found AI models are not yet superhuman forecasters.
Scientists are pretty impressed with o1, according to Nature.
There’s been lots of concern about Hurricane Helene’s impact on the semiconductor supply chain. I looked into it and found that the fears are likely overstated.
The Atlantic has an interesting piece on how AI chatbot transcripts are a gold mine for ad targeting.
And in the LRB, James Vincent explores the growing popularity of AI girl- and boyfriends.
Thanks for reading; have a great weekend.