Welcome to Transformer, your weekly briefing of what matters in AI.
Top stories
Republicans once again dominated the news:
Donald Trump picked JD Vance as his running mate, prompting jubilation among anti-regulation AI folks.
Vance has previously come out in support of open-source AI, and said he’s worried that pushes for AI regulation are a form of regulatory capture. The NYT has a good overview of his stance.
Peter Thiel, David Sacks and Elon Musk apparently lobbied Trump to pick Vance.
Elon Musk, Marc Andreessen and Ben Horowitz also came out as Trump supporters/donors this week.
Andreessen and Horowitz specifically cited Trump’s stance on AI as one of their reasons for backing him.
And the Washington Post got hold of an AI executive order being drafted by the Trump-linked America First Policy Institute.
The EO would establish “Manhattan Projects” for military use of AI, review “unnecessary and burdensome regulations”, and create “industry-led” agencies for model evals.
Labour mentioned AI regulation in this week’s King’s Speech — but only tentatively.
The government (well, the King reading the government’s speech) said it will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”.
A specific AI bill wasn’t mentioned. The FT reports that, even though DSIT has a draft bill ready to go (which would “[mandate] government access to the most powerful generation of new AI models for testing and evaluation”), the department was “squeezed out” because it already had two other (non-AI) bills in the speech.
What does this all mean? Legislation is coming, but it’s probably not imminent. And exactly what shape it’ll take is still all to play for: the FT says there’ll be a “consultation process on what this legislation will include”.
On Transformer: a Meta-funded group is flooding Facebook with anti-AI regulation ads.
Between February and June, the American Edge Project spent $150,000-$190,000 on Facebook ads warning of impending threats to the US if the country regulates artificial intelligence, according to data from Meta’s Ad Library analysed by Transformer.
Across over 150 ads, two arguments recur: that AI regulation will harm small businesses, and that China will overtake the United States in AI.
American Edge received $38 million from Meta in the 2019 and 2020 fiscal years, according to tax records and reporting from CNBC.
It received a further $47.5 million in the 2021-22 fiscal year, according to its most recent tax return, though the source of that donation is not known: the group’s legal structure allows it to keep donors anonymous.
American Edge has long advocated against tech regulation on Meta’s behalf, playing a particularly notable role in antitrust debates. In 2022, the Tech Transparency Project called American Edge “Facebook’s anti-regulatory attack dog”.
The discourse
Sen. John Thune said Republicans want AI legislation:
“If we get the majority, then I think for sure we’ll be doing something in that space.”
Arati Prabhakar said AI companies aren’t yet reporting safety test results to the White House:
“[The EO contained a] requirement to report once a company is training above a particular compute threshold, and I am not aware that we’ve yet hit that threshold. I think we’re sort of just coming into that moment.”
DeepSeek founder Liang Wenfeng gave an in-depth overview of the company in a recent interview:
“The problem we face has never been money, but the ban on high-end chips.”
Air Street Capital published a good overview of the state of Chinese AI:
“A small handful of Chinese labs are producing strong models that are highly competitive with the second-most powerful tier of models produced by US frontier labs. On specific sub-tasks, their performance matches the US state of the art.”
Chevron CIO Bill Braun isn’t too hot on the prospects of generative AI — even Copilot:
“We’re a little dissatisfied with our ability to know how [well] it’s working.”
CSET’s Dewey Murdick and Owen J. Daniels said the Loper Bright ruling doesn’t have to torpedo AI regulation:
“The Loper Bright decision, while challenging existing regulatory approaches, presents an opportunity to create a more agile, distributed, and innovation-friendly governance environment for AI.”
Policy
The Biden administration is reportedly considering invoking the foreign direct product rule to stop ASML and Tokyo Electron from shipping products to China.
OpenAI whistleblowers filed a complaint with the SEC alleging that the company illegally stopped employees from warning regulators about various issues, including securities violations.
Meta said it won’t release future multimodal models in the EU “due to the unpredictable nature of the European regulatory environment”.
The FTC is reportedly looking into Amazon’s hiring of Adept executives. The UK Competition and Markets Authority, meanwhile, is formally looking into Microsoft’s hiring of Inflection executives.
The Cyberspace Administration of China “requires companies to prepare between 20,000 and 70,000 questions designed to test whether [their AI] models produce safe answers”, the WSJ reported, while the FT said the CAC is testing whether models “embody core socialist values”.
Bloomberg has a timely profile of how the CAC works.
The UN plans to launch a bunch of AI programs and a global AI office to “fill gaps and bring coherence to the fast-emerging ecosystem of international AI governance responses”, Politico reported.
It sounds like the UN wants to wrest control from the G7, which has been leading global governance conversations so far.
Reps. Obernolte and Stevens introduced the EPIC Act, which would establish a non-profit foundation to fund NIST (giving it “increased access to private sector and philanthropic funding”).
The Department of Energy released a roadmap for its “Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative”.
It would build new AI supercomputers and “provide insight into properties of AI systems at scale and promote safety, security, trustworthiness, and privacy”.
Sens. Manchin and Murkowski recently introduced an act that would authorise the roadmap.
FCC Chair Jessica Rosenworcel proposed a rule that would require robocallers to disclose their use of AI.
The US launched an initiative to build semiconductor production capacity across the Americas.
Influence
Pirate Wires reported that Dan Hendrycks, one of the lead advocates for SB 1047, is the co-founder of Gray Swan, a new company which will “help companies assess the risks of their AI systems and safeguard their AI deployments from harmful use” — and so is seemingly well-suited to benefit from SB 1047’s compliance requirements.
Hendrycks said the article was “full of misrepresentations and errors”, and denied that Gray Swan is an “AI safety compliance” company, saying that “it is neither the intention nor the business model for Gray Swan to offer audits of the type that SB 1047 will require”.
On another note, Gray Swan’s first model (which it said was “designed to counter the most potent attacks”) has already been jailbroken.
Industry
OpenAI released GPT-4o mini, a very cheap but still pretty capable model. It’s 33x cheaper than GPT-4o for input and 25x cheaper for output (at launch pricing: $0.15 vs $5 per million input tokens, and $0.60 vs $15 per million output tokens).
The WSJ has a piece explaining all the demand for small AI models.
OpenAI’s Q* project is now known as “Strawberry”, Reuters reported. The company reportedly plans to use the technique, which improves models’ reasoning abilities, to conduct AI research.
Strawberry is reportedly similar to the Self-Taught Reasoner (STaR) technique developed at Stanford. STaR creator Noah Goodman said that if OpenAI are working on this, “that is both exciting and terrifying”.
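Strawberry’s details aren’t public, but the STaR loop it reportedly resembles is simple enough to sketch. Below is a toy, self-contained illustration (the “model” and its generate/finetune helpers are stand-ins for real LLM sampling and fine-tuning; every name is hypothetical) showing STaR’s core structure: sample a rationale, keep it only if it produced the right answer, then fine-tune on the keepers and repeat.

```python
import random

# Toy stand-in for sampling a chain-of-thought rationale plus a final answer.
# A real STaR implementation would prompt an LLM here.
def generate(model, problem):
    rationale = f"reasoning about {problem!r} with skill {model['skill']:.2f}"
    correct = random.random() < model["skill"]
    return rationale, (problem * 2 if correct else problem * 2 + 1)

# Toy stand-in for fine-tuning: each successful rationale nudges "skill" up.
def finetune(model, examples):
    model["skill"] = min(1.0, model["skill"] + 0.02 * len(examples))
    return model

problems = list(range(10))
answers = [p * 2 for p in problems]   # toy task: double the input
model = {"skill": 0.3}

for round_num in range(5):
    kept = []
    for problem, answer in zip(problems, answers):
        rationale, predicted = generate(model, problem)
        if predicted == answer:       # keep only rationales that led to a correct answer
            kept.append((problem, rationale, answer))
    model = finetune(model, kept)
    print(f"round {round_num}: kept {len(kept)} rationales, skill {model['skill']:.2f}")
```

The point of the filter-then-finetune loop is bootstrapping: the model trains only on its own reasoning traces that demonstrably worked, so each round’s model tends to generate better rationales for the next.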
OpenAI has reportedly talked to Broadcom about designing its own chip, and has hired some of Google’s TPU team. It sounds like Sam Altman’s pressing ahead with his plan to build lots of giant data centres, too.
OpenAI, Anthropic, Google, Microsoft and others launched the Coalition for Secure AI, which aims to establish “standardised practices that enhance AI security”.
TSMC’s earnings beat forecasts, and the company warned that supply will remain tight through 2025.
Google Cloud and Microsoft Azure give Chinese companies access to Nvidia H100 chips, The Information reported.
Anthropic and Menlo Ventures launched a $100m fund for AI startups. (Importantly, it’s not new capital.)
The FT’s got a good piece on employee unrest at Samsung, which is struggling to catch up with SK Hynix on memory chips for AI.
Fujitsu invested in Cohere; the two companies plan to develop a Japanese LLM.
Nvidia bought Brev.dev, which is building “the easiest way for AI/ML developers to use a GPU”.
Mistral released new code-generating and maths-reasoning models. It also partnered with Nvidia to release Mistral-NeMo, a 12B parameter model for businesses to run locally.
Fei-Fei Li has started a company called World Labs, which has already raised over $100m at a $1b+ valuation, the FT reports. Andreessen Horowitz is among the investors.
Nearfield, which makes equipment for chip manufacturers, raised $148m.
Saudi Aramco’s venture unit invested in AIXplain, a small agentic AI startup.
Moves
Drew Hudson is TechNet’s new general counsel and director of federal policy. He previously worked for Sen. Tom Cotton.
Kenrick Cai joined Reuters as an Alphabet and AI reporter.
Kari Paul is leaving The Guardian.
Jonas Schuett joined the OECD Expert Group on AI Risk and Accountability.
Sonia Joseph is joining Meta as a visiting AI researcher.
The Institute for Law & AI is hiring for lots of roles.
Best of the rest
An investigation found that Apple, Anthropic and Nvidia (among others) trained AI models on a database of YouTube Subtitles put together by EleutherAI. Many of the creators whose work features in the dataset are very mad.
Open Philanthropy’s put out a request for proposals for AI governance projects.
A paper from OpenAI’s now-defunct Superalignment team details a method for training advanced models to “generate text that weaker models can easily verify”.
A new paper found that “implicit meta-learning” may lead LLMs to trust more reliable sources. David Krueger thinks this “may be the first evidence for the existence of a mechanism by which sufficiently advanced AI systems would tend to become agentic”.
About 20% of Americans think current AIs are sentient.
A DoJ prosecutor said AI is making it much harder to tackle child abuse.
Wired’s got an overview of AI-powered coding agents.
The NYT has a piece on the relationship between the tech industry and universal basic income.
Jeff Dean said AI isn’t a major driver of emissions, yet.
Timnit Gebru is writing a “memoir and manifesto”.
The FT’s got a fun piece on how Taiwan’s AI workforce is getting very rich — which is in turn boosting the country’s property and entertainment sectors.
The FTC shut down an app that claimed it could detect STIs from dick pics.
Have a great weekend; see you next week.