Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
The UK’s Labour Party (which is almost certain to win next month’s general election) set out its AI priorities.
Its manifesto includes a few concrete promises: “binding regulation” on the companies developing the most powerful models, a ban on deepfake creation, and making it easier to build data centres.
There’ll also be a new Regulatory Innovation Office to “help regulators update regulation, speed up approval timelines, and co-ordinate issues that span existing boundaries”.
In a speech, shadow science & tech secretary Peter Kyle went a bit further, saying Labour would put AISI (the AI Safety Institute) on “a statutory footing” and “legislate to require the frontier AI labs to release their safety data”.
Politico has a good piece on who’s shaping all this: most notably, it reports that Kirsty Innes is “effectively writing the party’s AI policy”.
Apple announced lots of new AI features, which it’s calling “Apple Intelligence”.
Despite much of the media focus going to Apple’s partnership with OpenAI, the vast majority of the new features will be powered by Apple’s own models.
Users will only be prompted to send their questions to ChatGPT if Apple’s own models can’t handle them. Apple reportedly isn’t paying OpenAI for this.
It sounds like other models, such as Gemini, will get this level of integration soon too.
Apple published a bunch of info on two of its models, including some responsible development principles. It said it’s “actively conducting both manual and automatic red-teaming with internal and external teams to continue evaluating our models' safety”.
Apple’s advantage, obviously, is deep integration of AI with your other apps and data. Investors seem to think this is a killer feature: Apple’s stock soared, and it briefly regained the title of the world’s most valuable company.
Logan Paul interviewed Donald Trump about AI (a baffling sentence to write).
“It is a superpower, and you want to be right at the beginning of it, but it is very disconcerting. I said, you know, you used the word alarming. It is alarming.”
Trump’s very worried about deepfakes, particularly the idea that someone might deepfake the president announcing the launch of nuclear missiles, with other countries unable to tell that it’s fake.
He also confirmed he’s getting advice on AI from Elon Musk.
He’s worried about China, obviously: “We have to be at the forefront. It's going to happen. And if it's going to happen, we have to take the lead over China.”
He’s aware of AI’s electricity needs, and says America needs to expand electricity production as a result.
He’s aware of takeover risks, but didn’t take a stance on whether he’s worried about them.
“You know, there are those people that say it takes over the human race. It's really powerful stuff, AI. So let's see how it all works out.”
Oh, and he called superintelligence “super duper AI”.
The discourse
Tim Cook’s aware of the risks of AI:
“I don’t have my head stuck in the sand. I know that there’s also a parade of horribles that can occur, which is why we’re committed to being thoughtful in the space.”
Omar Al Olama said America could trust the UAE, but maybe not the rest of the Middle East:
“I think concerns about chips coming to the Middle East and going to China are valid concerns for any country that has adversaries.”
A former OpenAI employee told Sam Altman that he doesn’t trust him anymore:
“You often talk about our responsibility to develop AGI safely and to distribute the benefits broadly … How do you expect to be trusted with that responsibility when you failed at the much more basic task [of not threatening] to screw over departing employees?”
On Transformer: I responded to Jack Clark’s GPT-2 reflections, and argued that his policy proposals don’t go far enough:
“We’re relying on companies to keep their commitments, and when profit incentives push against society’s interests, that’s a dangerous place to be.”
Lawrence Lessig is very worried about AI risk:
“As a handful of companies race to achieve AGI, the most important technology of the century, we are trusting them and their boards to keep the public’s interest first. What could possibly go wrong?”
Cohere CEO Aidan Gomez said we need to do lots of work before we can take humans “completely out of the loop”:
“We need to have much more trust and controllability and the ability to set up those guardrails so that they behave more deterministically.”
The Internet Watch Foundation’s Dan Sexton said open source AI is driving abuse:
“As soon as these things were open sourced, that’s when the production of AI generative CSAM exploded.”
A Lucidworks study found that business leaders aren’t so excited about generative AI anymore:
“Unfortunately, the financial benefits of implemented projects have been dismal.”
Daniel Jeffries said SB 1047 is bad:
“It’s likely to destroy California's fantastic history of technological innovation.”
Policy
Later today the G7 is expected to issue a communiqué saying the countries will work together to “set up a safety certification system” for AI companies.
Sens. Gary Peters and Thom Tillis introduced legislation which would require government agencies to “assess and address the risks of their AI uses prior to buying and deploying the technology”.
It’s reportedly due for markup in the Senate Homeland Security and Governmental Affairs Committee “this summer”.
The US is reportedly considering taking measures to stop China from using gate-all-around architecture in its chips.
Rep. Nancy Mace invited Scarlett Johansson to testify to Congress about the OpenAI fiasco.
The FTC said it’ll hold companies accountable for false promises about what their AI tools can do.
Connecticut Sen. James Maroney said he’ll reintroduce his AI regulation bill next year.
Politico noted that the DOJ/FTC AI investigations might fall apart if Trump wins the election.
The EU AI Act’s reportedly been delayed a bit; the rulebook will now be published in July and come into force in August.
Amazon and Mistral joined the EU Internet Forum.
António Guterres said AI must be kept out of nuclear launch decisions.
Influence
Brad Smith testified to Congress, apologising for Microsoft’s cybersecurity lapses and promising to be better going forward. Security will now be “more important even than the company’s work on artificial intelligence”.
The Washington AI Network held an event with Mira Murati and Anna Makanju.
A16z’s continuing its anti-SB-1047 lobbying drive.
TechNet warned against California regulating AI, saying that the industry “doesn’t have to be” in California.
Tech lobbying in states surged to $13.4m last year, triple what it was in 2022.
Industry
OpenAI’s annualised revenue has reportedly reached $3.4b, double what it was six months ago. Only $200m of that comes from sales through Microsoft.
Microsoft reportedly wants to swap some of its AI products from running on OpenAI’s models to running on its own models.
Microsoft will rent Oracle cloud servers on behalf of OpenAI. The deal “enables OpenAI to use the Azure AI platform on OCI infrastructure for inference and other needs”, but all “pre-training of [OpenAI’s] frontier models” continues to happen on Microsoft infrastructure.
Mira Murati, talking about OpenAI’s slow takeoff strategy, said the company’s internal models aren’t “that far ahead” of what the public has access to.
Elon Musk dropped his OpenAI lawsuit.
Mistral raised €600m at an “almost” €6b valuation, led by General Catalyst.
Fortune has a great deep dive into how Amazon squandered its Alexa lead.
It sounds like Amazon doesn’t have the compute or data needed to train a frontier LLM, and internal politics are making everything difficult.
Its “Olympus” model, which was reported to have 2T parameters, is actually just 470B.
This week, Amazon said it’ll spend $230m on generative AI startups, with $80m of that going to its AWS Gen AI Accelerator programme.
Ant Group spent $2.9b on R&D last year, with a chunk of that going to AI research.
Tempus, a medical AI platform, raised $410m in its IPO.
Stability released Stable Diffusion 3 Medium. It really struggles to generate anatomically correct humans; users blame that on adult content being filtered out of its training data.
It’s a nice demonstration of how, once you release a model’s weights, it’s basically impossible to keep it functional while also stopping it from being used to create nonconsensual porn.
Luma launched Dream Machine, a text-to-video model.
Hugging Face bought Argilla for $10m, and said lots of startups are looking to sell.
Samsung said it’ll better integrate its memory, foundry, and packaging services for AI chips, cutting production time by 20%.
Rebellions and Sapeon, two South Korean AI chipmakers, are in talks to merge.
Huawei said its new chip is better than Nvidia’s A100.
Nvidia reportedly shipped 3.8m data-centre GPUs last year, up from 2.6m in 2022.
The Arm-Qualcomm legal battle could cause problems for AI PC shipments.
50% of Japanese chip-making equipment exports are going to China.
40% of VC funding is going to AI companies.
Moves
Kevin Weil, who used to run product at Twitter and Instagram, joined OpenAI as chief product officer. Sarah Friar, former CEO of Nextdoor, is the company’s new CFO.
Paul Nakasone joined OpenAI’s board of directors. He used to run the NSA, and OpenAI’s press release says he’ll contribute to improving the company’s cybersecurity.
Samsung is reportedly hiring former Apple exec Murat Akbacak to lead a new AI team, which will combine its Toronto and Mountain View AI orgs.
OpenAI’s global affairs team is now 35 people strong, and aims to grow to 50 by the end of the year.
Lauren Nolte, previously director for strategy and operations at Avoq, joined a16z as an executive assistant for government affairs.
Zico Kolter is the new director of Carnegie Mellon’s Machine Learning Department.
Heidy Khlaaf is joining AI Now as principal research scientist.
Divyansh Kaushik and Ben Schramm joined Beacon Global Strategies as vice presidents.
Joel Burke, a Horizon fellow who’s been working for Sen. Mike Rounds, is joining Mozilla as a senior public policy and government relations analyst.
Stephen McAleer joined OpenAI to research agent safety.
Dave Burke is now working on AI/bio projects at Google. He used to run Android’s engineering team.
Rory McCarthy left Control AI, which is “restructuring/changing direction”.
Some US-based AI companies are reportedly relocating their China-based engineers.
The Stanford Internet Observatory, which has produced some of the best research on AI CSAM, is reportedly winding down.
Scale AI said it is instituting an “MEI” hiring policy: “merit, excellence, and intelligence”.
Best of the rest
Perplexity is reportedly seeking revenue-sharing deals with publishers. Last week, Forbes accused it of plagiarism.
Microsoft delayed launching its controversial Recall AI feature.
François Chollet and Mike Knoop launched a $1m prize for an open-source solution that beats the ARC-AGI benchmark.
Nvidia topped the new MLPerf benchmarks.
Polling suggests that voters like the Senate AI roadmap. There’s bipartisan support for most stuff, but Dems seem to like regulation more.
OpenAI seems to have undercover security guards outside its office.
Clearview AI agreed a settlement which will offer equity to people included in its database.
Ed Zitron said Sam Altman’s a “false prophet”.
Hugging Face wrote a Teen Vogue op-ed about “how to stop deepfake porn using AI”. Incredibly, at no point does it reckon with Hugging Face’s own role in the creation of deepfakes.
Human Rights Watch found that images and personal details of Brazilian children were included in Common Crawl and LAION-5B.
Anthropic put out a post on the challenges of red teaming.
A new paper found that language models can “sandbag”, or strategically hide dangerous capabilities during evals. The authors say this behaviour can be induced via prompting or fine-tuning.
The Collective Intelligence Project and Anthropic published a paper on “collective constitutional AI”.
CSIS put out a report on how “AI could shape the future of deterrence”. CNAS, meanwhile, has a new report on “artificial intelligence, catastrophes, and national security”.
Brazil is using AI to screen lawsuits.
Brian Potter published an excellent explainer on the difficulties of building AI data centres.
Ciaran Martin said it's unlikely deepfakes will harm elections.
A new documentary about DeepMind and AGI premiered at Tribeca.
An “AI candidate” is running for MP in the UK.
Thanks for reading, see you next week.