Introducing Transformer
For the past year, I’ve been providing hundreds of AI professionals — including some of the most powerful people in government, academia, and non-profits — with a private weekly summary of everything they need to know. Now I’m opening it up to everyone.
Transformer is your weekly briefing on what matters in AI, targeted at policymakers and people interested in AI policy. Focused on AI safety, it’s a quick but comprehensive digest of everything you need to know, with an eye on both what’s happening and what people are saying.
You can read more about what I’m aiming to do with Transformer here. For now, on with today’s issue.
If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
The Senate AI Working Group (Sens. Schumer, Rounds, Heinrich and Young) released their roadmap for AI policy.
Most discussion of the roadmap has focused on the $32b in annual spending it calls for allocating to AI R&D, along with plenty of talk about how the US has to stay ahead of China.
Other stuff that jumps out to me, mostly in the form of recommendations to committees:
“Support efforts related to the development of a capabilities-focused risk-based approach, particularly the development and standardisation of risk testing and evaluation methodologies and mechanisms”
“Investigate the policy implications of different product release choices for AI systems, particularly to understand the differences between closed versus fully open-source models”
“Develop an analytical framework that specifies what circumstances would warrant a requirement of pre-deployment evaluation of AI models.”
“Consider a capabilities-based AI risk regime that takes into consideration short-, medium-, and long-term risks”
“Develop legislation aimed at advancing R&D efforts that address the risks posed by various AI system capabilities”
“The AI Working Group acknowledges the ongoing work of the IC to monitor emerging technology and AI developed by adversaries, including artificial general intelligence (AGI), and encourages the relevant committees to consider legislation to bolster this effort and make sure this long-term monitoring continues.”
“Better define AGI in consultation with experts, characterise both the likelihood of AGI development and the magnitude of the risks that AGI development would pose, and develop an appropriate policy framework based on that analysis”
“AI has the potential to increase the risk posed by bioweapons and is directly relevant to federal efforts to defend against CBRN threats. Therefore, the AI Working Group encourages the relevant committees to consider the recommendations of the National Security Commission on Emerging Biotechnology and the NSCAI in this domain, including as they relate to preventing adversaries from procuring necessary capabilities in furtherance of an AI-enhanced bioweapon program.”
“Ensure BIS proactively manages these technologies and to investigate whether there is a need for new authorities to address the unique and quickly burgeoning capabilities of AI, including the feasibility of options to implement on-chip security mechanisms for high-end AI chips.”
“Develop a framework for determining when, or if, export controls should be placed on powerful AI systems.”
“Develop a framework for determining when an AI system, if acquired by an adversary, would be powerful enough that it would pose such a grave risk to national security that it should be considered classified”
I’ve included some other highlights in this thread.
Commenting on the report, Sen. Schumer said: “We're not gonna wait for one huge comprehensive piece of legislation to move any piece of legislation. We're gonna bring them to the floor as they come out.”
He also said that he hopes they “have some bills that certainly pass the Senate and hopefully pass the House by the end of the year”.
Schumer dismissed concerns that the roadmap is a stalling tactic: "We have always looked at the committees. We're not kicking the ball down the road. That's the next logical step. We're not deferring or delaying.”
Some people told Fast Company they were unimpressed.
Suresh Venkatasubramanian: “My overwhelming reaction is disappointment … I feel betrayed.”
Alondra Nelson: “It is, in fact, striking for its lack of vision.”
Similar quotes from others in WaPo:
Evan Greer: “This road map leads to a dead end”
Though TechNet likes it, saying it “will strengthen America’s global competitiveness in AI and emerging technologies”.
On Transformer: Ilya Sutskever and Jan Leike are both leaving OpenAI, joining a recent exodus of safety-minded people from the company.
A total of eight people known for their concerns about AI safety have recently left OpenAI, with at least one saying he quit “due to losing confidence that [OpenAI] would behave responsibly around the time of AGI”.
Notably, four of the departing employees appear not to have signed the November letter calling for Sam Altman’s reinstatement as CEO.
The interim version of the International Scientific Report on the Safety of Advanced AI, a gigantic 132-page report chaired by Yoshua Bengio and featuring contributions from 75 AI experts, was published. Highlights:
“Malfunctioning or maliciously used general-purpose AI can also cause harm, for instance through biased decisions in high-stakes settings or through scams, fake media, or privacy violations.”
“As general-purpose AI capabilities continue to advance, risks such as large-scale labour market impacts, AI-enabled hacking or biological attacks, and society losing control over general-purpose AI could emerge, although the likelihood of these scenarios is debated among researchers.”
“There is considerable uncertainty about the rate of future progress in general-purpose AI capabilities.”
“Developers still understand little about how their general-purpose AI models operate.”
The overwhelming vibe from what I’ve read so far is “there is so much we don’t know and everything is uncertain”. Which seems true!
In the foreword, Michelle Donelan writes that the report “shines a light on the significant gaps in our current knowledge and the key uncertainties and debates that urgently require further research and discussion”.
OpenAI and Google both demoed multimodal AI assistants this week: GPT-4o and Project Astra respectively. The real-time voice chat seems like a notable advance.
The text version of GPT-4o will be available for free.
Lots of other Google announcements, too:
It doubled Gemini 1.5 Pro’s context window to 2 million tokens, and announced Gemini 1.5 Flash (a cheaper and faster version), Gemma 2 (a new 27b parameter open-weights model) and PaliGemma (a vision-language model).
It launched its new Trillium chip, which it says has 4.7x better performance than its TPU v5e chip.
It announced lots of integration between Gemini and its other products, like Search, Chrome, Gmail and Photos.
And it said it will start doing “AI-assisted red teaming”, and is introducing watermarking to AI-generated text and video. It plans to open-source the watermarking tools.
Politico’s Brendan Bordelon has an excellent piece about how Big Tech lobbyists are gaining the upper hand in AI policy discussions.
Politico reports that IBM has ten full-time lobbyists working “to oppose AI licensing or closed-source mandates”.
Nvidia lobbyists, meanwhile, are reportedly “badmouthing a recent proposal by the Center for a New American Security think tank to require ‘on-chip governance mechanisms’”.
Meta, Andreessen Horowitz, Charles Koch and Hugging Face are all ramping up lobbying too.
The anti-regulation push seems to be working: Bordelon cites Sen. Todd Young as saying “the more people learn about some of these [AI] models, the more comfortable they are that the steps our government has already taken are by-and-large appropriate steps”.
The discourse
On the back of a bunch of UK AI announcements, Rishi Sunak said the UK wouldn’t rush to regulate:
“Too often regulation can stifle those innovators. We cannot let that happen … That’s why we don’t support calls for a blanket ban or pause in AI. It’s why we are not legislating. It’s also why we are pro-open source … There must be a very high bar for any restrictions on open source. But that doesn’t mean we are blind to risks.”
OpenAI co-founder John Schulman thinks AGI might be very, very close:
“I don't think this is going to happen next year but it's still useful to have the conversation. It could be two or three years instead.”
In the Wall Street Journal, Andreessen Horowitz partners Martin Casado and Katherine Boyle said that AI talks are leaving “little tech” out:
“Although the public-facing argument for AI regulation is to promote safety, we believe the true purpose is to suppress open-source innovation and deter competitive startups.”
(Not mentioned, of course, is that a16z is hardly a little guy.)
Y Combinator’s Garry Tan is pushing the same “little guy” narrative:
“We’re in the middle of the craziest battle for little tech vs Big Tech … it comes back to whether open source AI models are going to be allowed to thrive.”
Samuel Hammond notes that some AI regulation isn’t nearly as demanding as its critics would have you believe:
“Requiring safety testing and disclosures for the outputs of $100 million-plus training runs is not an example of regulatory capture nor a meaningful barrier to entry relative to the cost of compute.”
On Transformer, I argue that the “ethics vs safety” fight misses the bigger picture:
“Do we really think that Meta, of all companies, actually cares about the present-day harms from AI? Or could it perhaps be that stoking infighting among those advocating for AI regulation is a rather good way to stop any regulation from being passed at all?”
TechNet CEO Linda Moore doesn’t seem too hot on Scott Wiener’s AI bill:
“Creating a patchwork of state AI regulations is not in the best interest of anybody”
Sundar Pichai isn’t too worried about election deepfakes:
“I’m cautiously optimistic we’ll be able to do our part handling all of this well.”
In Royal Society Open Science, Huw Price argues that we shouldn’t ignore catastrophic risks just because they’re unlikely:
“I think we should at present be sceptical of those who are overly dismissive of claims about such risks, even if they have expertise in the field themselves. Have they really made a realistic estimate of the possibility that they themselves might be wrong, and set it against a proper assessment of the degree of certainty that a risk of this magnitude requires?”
Human rights lawyer Susie Alegre doesn’t like the “regulation stifles innovation” argument:
“I think there is the opposite risk, that if you allow AI to dominate in ways that undermine our ability to think for ourselves, to claim back our attention, we will lose the capacity to innovate.”
Julia Angwin says “AI is not even close to living up to its hype”:
“Should we as a society be investing tens of billions of dollars, our precious electricity that could be used toward moving away from fossil fuels, and a generation of the brightest math and science minds on incremental improvements in mediocre email writing?”
Policy
The Senate Rules Committee voted to advance the Preparing Election Administrators for AI Act with bipartisan support. The AI Transparency in Elections Act and the Protect Elections from Deceptive AI Act advanced on party lines; Mitch McConnell said he will oppose them.
The House Foreign Affairs Committee voted to advance the ENFORCE Act, which would allow the Commerce Department to apply export controls to covered AI models, such as those which “substantially [lower] the barrier of entry for experts or non-experts” to create chemical, biological, radiological, or nuclear weapons; or those which might permit “the evasion of human control or oversight through means of deception or obfuscation”.
It also advanced the Remote Access Security Act, which I think would expand export controls to cloud computing services (if I’m wrong, please correct me!).
The US and China held their first AI dialogue. It seemed… vague. (“Both sides recognized that while AI presents opportunities, ‘it also poses risks,’” Reuters reports.)
Matt Sheehan notes that the Chinese delegation was led by the Foreign Ministry North America bureau, which “indicates China treating dialogue as an aspect of US-China relations, not global tech risk”.
Chinese regulators are reportedly telling companies to buy more domestically-manufactured chips.
The US increased tariffs on Chinese semiconductor imports.
The UK AI Safety Institute open-sourced Inspect, its safety evaluations platform.
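For readers who haven’t looked inside an evals platform before: tools like Inspect essentially automate running a model over a dataset of prompts, scoring the outputs, and reporting an aggregate result. The snippet below is a deliberately simplified, library-free sketch of that loop; it is not Inspect’s actual API, and the toy_model stand-in and tiny dataset are hypothetical.

```python
# Toy sketch of what a safety-evals harness automates: run a model over a
# dataset of prompts, score each answer, and report an aggregate metric.
# Illustrative only; this does not use Inspect's real API.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Sample:
    prompt: str   # what we ask the model
    target: str   # the answer we expect


def run_eval(model_fn: Callable[[str], str], dataset: list[Sample]) -> float:
    """Return the fraction of samples whose output contains the target answer."""
    correct = 0
    for sample in dataset:
        output = model_fn(sample.prompt)
        correct += int(sample.target.lower() in output.lower())
    return correct / len(dataset)


if __name__ == "__main__":
    # Hypothetical stand-in "model" so the sketch runs end to end; a real
    # harness would call out to an actual model API here.
    def toy_model(prompt: str) -> str:
        return "Paris" if "France" in prompt else "I don't know"

    dataset = [
        Sample("What is the capital of France?", "Paris"),
        Sample("What is the capital of Japan?", "Tokyo"),
    ]
    print(f"accuracy: {run_eval(toy_model, dataset):.2f}")
```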
Influence
Jacob Helberg donated $1m to the Trump campaign.
Politico reported on the successful lobbying effort to kill Connecticut’s AI bill, led by the CTA, and the similar effort under way in Colorado right now, where Chamber of Progress and the US Chamber of Commerce are pushing Gov. Polis to not sign the bill.
On Transformer, I took a deep dive into Meta’s AI lobbying army.
In Q1, the company had 30 well-connected lobbyists across seven firms working on AI.
In total, Meta spent $7,640,000 on lobbying in the first quarter of 2024, according to OpenSecrets and my own analysis of lobbying disclosures. Of that, $315,000 went to firms lobbying on AI on its behalf, and a substantial fraction of its $6,693,750 internal lobbying spending was likely focused on AI, too.
Notable lobbyists include Rick Dearborn, who was deputy chief of staff to President Trump and executive director of the 2017 Trump transition team, and is now a partner at Mindset. Others include Luke Albee, former chief of staff to Sen. Mark Warner; Courtney Temple, former legislative director for Sen. Thom Tillis; Daniel Kidera and Sonia Gill, both former aides to Sen. Chuck Schumer; and Chris Randle, former legislative director to Rep. Hakeem Jeffries.
BGR Group got its clients (including Microsoft and IBM) to meet Preston Hill and Alex Scheur, staffers working on AI for Reps. Mike Johnson and Hakeem Jeffries respectively.
Encode Justice released AI 2030, a youth-led “platform for global AI action”. It’s signed by lots of young people, and also some big names (Yoshua Bengio, Mary Robinson, Margaret Mitchell).
It calls for governments to “establish a global authority to minimize the dangers of AI, particularly foundation models”, which should “set central safety standards, limit the proliferation of the most dangerous capabilities, and monitor the global movement of large-scale computing resources and hardware”.
Meta lobbyists Chris Randle, Christina Weaver Jackson and Jennifer Stewart (who also works for Microsoft) donated to Angela Alsobrooks’ Senate primary campaign, Politico reports.
Industry
Stability AI is reportedly in sale talks: The Information reports that it lost more than $30m last quarter and owes $100m to cloud providers. It’s also reportedly talking about raising cash from a group that includes Sean Parker.
xAI is reportedly in talks for a $10b deal with Oracle for cloud compute.
Apple is reportedly close to a deal with OpenAI to have ChatGPT power some iOS features. It’s also reportedly putting its M2 Ultra chips in data centres to process its most advanced new AI features.
The UAE’s Technology Innovation Institute released Falcon 2 11B, which it says outperforms Llama 3 8B.
Tenyx claimed to have fine-tuned Llama 3 to “outperform OpenAI’s GPT-4 in certain domains”.
Sony Music reportedly told AI companies that they cannot use its music, including for training.
OpenAI and Reddit signed a licensing agreement.
ChatGPT now integrates with Google Drive and OneDrive.
01.AI launched Wanzhi, its first consumer-facing AI product.
Hugging Face CEO Clem Delangue said the company is “profitable, or close to profitable”. It also launched ZeroGPU, a program that will offer $10m worth of shared A100s for free.
Chinese firms CXMT and Tongfu Microelectronics have reportedly developed “sample HBM chips”. HBM (high-bandwidth memory) is a key component in AI chips.
TSMC said ASML’s new High-NA EUV machines are very expensive, and that it probably won’t use them on its A16 node.
Intel is reportedly in talks with Apollo to build a fab in Ireland.
AMD MI300X and Microsoft Cobalt 100 chips will reportedly be available on Azure next week.
Ampere said it’s working with Qualcomm on cloud AI inference.
Microsoft said it would invest $4b in France.
Claude’s finally available in Europe.
The UK AI sector got some wins: CoreWeave said it would invest £1b; PolyAI raised at a “close to $500m” valuation, led by Hedosophia and with participation from Nvidia; and Bristol’s Isambard-AI supercomputer came online.
Snowflake is reportedly in talks to buy Reka AI for $1b.
Meta’s reportedly considering making AI-powered earphones with cameras.
Weka, which builds data pipeline solutions for AI developers, raised $140m at a $1.6b valuation.
Voxel51 raised $30m for its visual AI platform.
Moves
Nabiha Syed is Mozilla’s new executive director.
Jakub Pachocki is OpenAI’s new chief scientist.
Instagram co-founder Mike Krieger joined Anthropic as chief product officer.
Adam Selipsky is out as AWS CEO. Matt Garman is replacing him.
Kevin O’Buckley is the new head of Intel’s foundry business, replacing Stu Pann. O’Buckley was formerly an SVP at Marvell.
Lead OpenAI engineer Evan Morikawa left; he’s starting what sounds like a robotics company with Boston Dynamics’ Andy Barry and DeepMind’s Pete Florence and Andy Zeng.
Emil Michael, Rich Miner and Mikhail Parakhin joined Perplexity as advisors.
Microsoft is reportedly asking almost 800 China-based employees to relocate.
Replit cut 20% of its staff, with CEO Amjad Masad telling The Information that he wants to pivot to enterprise sales.
Best of the rest
Luminaries including Yoshua Bengio, Stuart Russell, and David “davidad” Dalrymple published a new paper outlining a framework for “guaranteed safe AI”.
A new paper found that people can’t reliably distinguish GPT-4 from humans in a Turing test.
The Guardian covered a new MIT paper which found that AIs are getting better at deception.
A new survey found that 78% of Americans are worried AI will influence the election.
Some experts are worried that Google’s new client-side AI scanning tools to detect scam calls will “[lay] the path for centralised, device-level client side scanning”.
In TIME, Otto Barten outlined “how to hit pause on AI before it’s too late”.
Microsoft’s carbon emissions went up 30% last year, with the company suggesting AI demand was part of the cause.
German companies like Henkel seem optimistic that AI can boost productivity.
Crunchbase data showed that more than 50% of global AI-related venture funding last year went to Bay Area-based companies.
Data and Society put out a policy brief saying that “AI governance needs sociotechnical expertise”.
The Washington Post wrote about deepfake detection startups, which seem to overstate their capabilities.
The FT’s John Thornhill reviewed three AI books: Nigel Toon’s How AI Thinks; Verity Harding’s AI Needs You; and Nigel Shadbolt and Roger Hampson’s As If Human.
Chris Stokel-Walker’s new book, How AI Ate the World, was published.
Fast Company has a nice piece about the tabletop exercises Arizona’s running to prepare for AI election disinformation.
Coming up
May 21-22: The AI Seoul Summit is happening next Tuesday 21st and Wednesday 22nd. Politico reports that the first day will bring nine politicians together to make a “Seoul declaration” on safety; the second will have wider discussions among 29 countries (including China). Yoshua Bengio’s International Scientific Report on the Safety of Advanced AI is also set to be published.
May 21: The Existential Risk Observatory is hosting a virtual event with Michelle Donelan, Yoshua Bengio, Max Tegmark, and others.
May 22: Senate Commerce will mark up AI bills, according to Axios, likely including the CREATE AI Act, Future of AI Innovation Act, AI Research Innovation and Accountability Act, and the Promoting US Leadership in Standards Act.
Sen. Cantwell’s AI workforce bill and Sen. Hickenlooper’s AI auditing standards bill “could be ready for a vote” too, Axios reports.
May 22: The House Homeland Security Committee is holding a hearing on “harnessing artificial intelligence to defend and secure the homeland”.