Transformer Weekly — Nov 1
The Chinese military weaponises Meta's AI, Anthropic outlines its policy views, and AI sales keep soaring.
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
Chinese researchers used Meta’s open-weights Llama 2 model to build an AI tool for military use, Reuters reported.
One tool, built on top of Llama 2, is a “military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making”.
Two of the researchers on the project work for China’s Academy of Military Science, the Chinese military’s research body.
Reuters also reported that the Aviation Industry Corporation of China used Llama 2 for “the training of airborne electronic warfare interference strategies”.
Meta public policy director Molly Montgomery told Reuters that “any use of our models by the People's Liberation Army is unauthorised and contrary to our acceptable use policy”.
But as Meta knows, those policies are impossible to enforce if the model’s weights are freely available.
There’s no suggestion that these tools are particularly good or useful — they seem not to be. But it’s the clearest sign yet that, as long as Meta releases the weights of its frontier models, the Chinese military will try to weaponise them.
Anthropic called for governments to act fast to address the potentially catastrophic risks from AI. It’s worth reading in full, but here’s the TL;DR:
“Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast … Surgical, careful regulation will soon be needed.”
Motivating this urgency, Anthropic says, is the pace of AI progress: “About a year ago, we warned that frontier models might pose real risks in the cyber and CBRN domains within 2-3 years … we believe we are now substantially closer to such risks.”
The company gives governments some specific advice, all of which strikes me as pretty sensible:
“require companies to have and publish RSP-like policies … [and] a set of risk evaluations”, as well as having “some mechanism to verify the accuracy of these statements”.
“incentivise companies to develop RSPs that are effective at preventing catastrophes” (with the caveat that “any [such] mechanism should be flexible”)
Governments should “not impose burdens that are unnecessary or unrelated to the issues at hand”.
Relatedly: the next wave of AI models is on the horizon.
Meta announced that it’s training Llama 4 on a cluster of over 100,000 H100s and plans to release the model early next year.
Google reportedly plans to announce Gemini 2.0 next month — though apparently the model “isn’t showing the performance gains the … team had hoped for”.
Google also plans to preview an AI agent called Project Jarvis that can take over web browsers to complete tasks, The Information says.
The Verge also reported that OpenAI plans to release “Orion”, its next frontier model, by the end of the year.
Sam Altman has denied that — and said that nothing called GPT-5 is coming this year either. But he’s also said that there are some “very good releases coming later this year”.
Interestingly, Altman attributed the release timeline to compute constraints:
“We are prioritising shipping o1 and its successors. All of these models have gotten quite complex and we can't ship as many things in parallel as we'd like to. (We also face a lot of limitations and hard decisions about [how] we allocated our compute towards many great ideas.)”
The discourse
Vinod Khosla wants Scott Wiener to butt out of AI:
“He’s clueless about the real dangers [of AI], which are national security issues … This is a global national security issue. He’s not qualified to have an opinion there.”
James Cameron is worried about AGI:
“You'll be living in a world that you didn't agree to, didn't vote for, that you are co-inhabiting with a super-intelligent alien species that answers to the goals and rules of a corporation … That's a scarier scenario than what I presented in 'The Terminator' 40 years ago, if for no other reason than it's no longer science fiction. It's happening.”
Thomas Friedman thinks Kamala Harris is better suited to be president, because she’ll do a better job on AGI:
“This election coincides with one of the greatest scientific turning points in human history: the birth of artificial general intelligence, or AGI, which is likely to emerge in the next four years and will require our next president to pull together a global coalition to productively, safely and compatibly govern computers that will soon have minds of their own superior to our own.”
Peter Kyle said the need for AI safety laws is obvious:
“If there is a model that makes its way into public and it leads to a wide societal harm or damage to national security, someone’s gonna have to hold the can for that … Imagine being the government that says, ‘I understood what the capacity for harm was of these models, but I thought it OK to leave it to them to deal with it.’”
Policy
Lots of news about export control violations this week.
TSMC reportedly suspended shipments to at least two chip companies suspected of circumventing US export controls on Huawei.
An Indian pharmaceutical company reportedly exported Dell servers containing Nvidia H100s to Russia.
And a new report by SemiAnalysis detailed all sorts of loopholes that Huawei and SMIC are using to get around export controls.
Speaking of China… the Treasury Department issued new rules to limit US investment in Chinese AI companies.
This was a bit over a week ago, but worth highlighting: the White House released a National Security Memorandum on AI.
CNAS has a great, detailed analysis of the memo.
Of particular note is that it explicitly calls for the AI Safety Institute to test models for “risks relating to cybersecurity, biosecurity, chemical weapons, system autonomy, and other risks as appropriate (not including nuclear risk, the assessment of which shall be led by DOE)”.
The Biden administration announced an $825m investment in a new semiconductor R&D facility focused on improving EUV technology.
US Africa Command purchased OpenAI tools through Microsoft for military operations, the Intercept reported, marking the first confirmed use of OpenAI tech by a US combat command.
The Delaware AG reportedly asked OpenAI for more information on its transition to a for-profit, saying that “the current beneficiaries of OpenAI have an interest in ensuring that charitable assets are not transferred to private interests without due consideration”.
Last week, Transformer asked whether the transition will treat the non-profit fairly.
The WaPo has a good overview of the race to replace Rep. Anna Eshoo. Both candidates opposed SB 1047.
Texas Rep. Giovanni Capriglione unveiled a draft AI regulation bill. It seemingly emerged from the Future of Privacy Forum, and looks pretty bad: neither Dean Ball nor Zvi Mowshowitz likes it.
The EU said it will review Nvidia's proposed $700m acquisition of Run:ai.
The UK's Competition and Markets Authority launched a formal investigation into Google's Anthropic partnership.
The UK will reportedly make a “cluster of AI announcements” in November.
The FT says that’ll include Matt Clifford’s “AI Opportunities Action Plan”, which will reportedly call for the government to “reduce the cost and complexity of visas” for AI experts.
I expect it’ll also include the long-awaited AI Bill consultation.
The UK AI Safety Institute team visited the Beijing Institute of AI Safety and Governance.
Influence
The Open Source Initiative released a new definition for open-source AI. Meta, whose models do not meet the definition, does not like it.
RAND released a government-commissioned global catastrophic risks report, which discusses AI risk.
A poll by the Center for Youth and AI found that 80% of US teens believe addressing AI risks should be a top priority for lawmakers.
Bloomberg profiled Luther Lowe, who is doing a post-1047 victory lap.
The Information Technology Industry Council held a launch party for the new Congressional Staff Association on AI. Attendees reportedly included lobbyists from CSA.ai, Cognizant, IBM and Broadcom, and staffers for Sen. Rounds and Reps. Beyer and Jeffries.
Meta said it is working to get its Llama models adopted across the US government, including the State Department and Department of Education.
EPIC filed a complaint with the FTC calling for an investigation of OpenAI. It’s accusing the company of failing “to show evidence that its products meet even the few responsible development and use standards put forth”, including the Executive Order.
Industry
A big week of tech earnings showed that AI is growing fast.
AWS revenue was up 19% YoY, driven by generative AI demand. Andy Jassy said the company’s AI business “is a multibillion-dollar revenue run rate business that continues to grow at a triple-digit year-over-year percentage, and is growing more than three times faster at this stage of its evolution as AWS itself grew”.
Microsoft said its AI business will pass a $10b annual revenue run rate next quarter, making it the company’s fastest business to hit that milestone.
And Google’s cloud revenue was up 35% YoY, also driven by AI demand.
Meta, meanwhile, said AI recommendation and ad-targeting algorithms are significantly increasing engagement time on its apps. It also said it’ll spend at least $38b on capital expenditures this year, and even more next year.
AMD’s earnings were less rosy, despite raising its AI chip revenue forecast to $5b for the year.
Huawei reported significantly decreased net profit, which Bloomberg says suggests “the company may be struggling with its chipmaking yield”.
And Samsung’s semiconductor earnings plunged, adding to the company’s many woes.
xAI is reportedly raising $5b at a $45b valuation. The round will reportedly be led by Valor, with Sequoia participating. Elon Musk has also approached investors from Qatar and Saudi Arabia to invest, the FT says. The UAE’s MGX is reportedly interested, too.
OpenAI and Broadcom are reportedly developing a custom AI inference chip, to be manufactured by TSMC. OpenAI has reportedly shelved plans to set up its own chip fabs for now.
You can now use Anthropic and Google models with GitHub Copilot (which is, of course, owned by Microsoft).
Relatedly, OpenAI board members and investors reportedly told The Economist that they think OpenAI should stop being exclusive to Azure — because they want its models to be sold on AWS.
Arm wants to build AI accelerators to rival Nvidia.
Apple launched Apple Intelligence this week, and said it will roll out the features to EU users starting in April.
The new AI-enhanced Alexa is reportedly delayed due to technical challenges.
LinkedIn launched an AI agent to automate recruitment tasks.
KKR and Energy Capital Partners are investing $50b in data centres and power projects to support AI development.
Waymo raised $5.6b at a reported $45b+ valuation.
Sierra raised $175m at a $4.5b valuation.
Emad Mostaque, Stability AI's former CEO, relinquished his controlling stake in the company.
Moves
On Transformer: Anthropic hired Kyle Fish as its first full-time “AI welfare” researcher.
He joined the company last month to explore whether we might have moral obligations to AI systems.
The news came as researchers released a major new report arguing that it is time for AI companies to start taking the possibility of AI welfare seriously.
Matt Cronin, formerly of the House China committee, joined a16z as senior national security adviser.
Jonathan Cannon joined Meta as a public policy manager. He was previously at the R Street Institute.
Iason Gabriel was promoted to senior staff research scientist at Google DeepMind, leading a new team focused on frontier questions in AI ethics.
Julian Schrittwieser announced he is joining Anthropic after 10 years at Google DeepMind.
Former OpenAI board member and US Representative Will Hurd joined CHAOS Industries as chief strategy officer.
Best of the rest
Researchers found that OpenAI's Whisper AI transcription tool frequently invented text in medical settings.
404 Media found that Microsoft failed to retire an AI gender detection tool it had promised to get rid of.
FAR.AI researchers found that a small amount of poisoned data introduced in fine-tuning can make GPT-4o answer “virtually any” harmful question.
A new report looks at the various Chinese institutions operating as pseudo-counterparts to the US and UK AI Safety Institutes.
GovAI published a new paper on “safety cases for frontier AI”.
Google DeepMind open-sourced its AI text watermarking tool.
Bloomberg profiled Inception, G42’s research arm, which is focused on developing AI products for countries in the “Global South”.
AI models produce less accurate responses to election questions in Spanish than in English.
A study found judges selectively use pretrial risk assessment algorithms, seemingly to justify decisions they already want to make.
Universal Music Group partnered with AI music company Klay Vision to develop AI music models that respect copyright and artist rights.
Meta signed a multi-year deal with Reuters to use its content for AI chatbot responses.
The AI boom could generate up to 2.5m metric tons of additional e-waste annually by 2030, a new study found. That’s equivalent to discarding 13b iPhones each year.
A UK man was jailed for 18 years for using AI to create child sexual abuse images.
Thanks for reading; have a great weekend.