Transformer Weekly — July 12
Republicans repealing AI EO | OpenAI’s ‘failed’ safety processes | FTC on open weights
Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
The Republican National Committee published its policy platform, which said the party would repeal Biden’s executive order on AI.
The document describes the EO as “dangerous”, arguing that it “hinders AI innovation, and imposes Radical Leftwing ideas on the development of this technology”.
“In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.” (All that bizarre capitalisation is courtesy of the RNC, by the way).
The move’s already been praised by tech trade group NetChoice: VP Carl Szabo told TIME that repealing the EO “would be good for Americans and innovators”.
Many others, though, are worried. AI Now’s Amba Kak said a repeal would “feel like going back to ground zero”, while Sandra Wachter said it would be a “very big loss”.
Though the RNC characterises the EO as being left-wing, much of it is actually dedicated to national security issues (such as requiring developers to share safety test results if their models pose “a serious risk to national security, national economic security, or national public health and safety”).
Ami Fields-Meyer, who worked on the EO, pointed out to TIME that it’s odd for Republicans to oppose such measures: “If the Trump campaign believes that the rigorous national security safeguards proposed in the executive order are radical liberal ideas, that should be concerning to every American,” he said.
Reid Hoffman, meanwhile, said undoing the EO “would be a gift to China and others”.
Bloomberg Government says the move might actually spur Congress into legislating on AI, though.
Some members of OpenAI’s safety team “felt pressured to speed through” a safety testing process ahead of the launch of GPT-4o, the Washington Post reported.
The evaluations were done in a single week, which some employees felt was too rushed. “We basically failed at the process,” one employee said, noting that the launch after-party was planned “prior to knowing if it was safe to launch”.
Another employee pushed back on the idea that anything negligent was done, but did admit that “this [was] just not the best way to do it”.
In a statement, OpenAI said it would allow for more testing time in future.
The discourse
Yoshua Bengio published an excellent article responding to common objections against taking AI safety seriously:
“I worry that with the current trajectory of public and political engagement with AI risk, we could collectively sleepwalk – even race – into a fog behind which could lie a catastrophe that many knew was possible, but whose prevention wasn’t prioritised enough.”
The FTC put out a statement on open-weights foundation models:
“They have the potential to be a positive force for innovation and competition … [but] pose additional risks [and] have already enabled concrete harms, particularly in the area of nonconsensual intimate imagery and child sexual abuse material”.
Lee Sedol, who famously lost a Go match to an AI, wants society to wake up to the pace of progress:
“I faced the issues of AI early, but it will happen for others. It may not be a happy ending.”
Gabriel Nicholas said AI red-teamers need information on how people are actually using these technologies:
“The ability to see how AI is being used in the real world will likely make the difference between strategic and specific regulation and a fingers-crossed approach to mitigating the technology’s most dangerous effects.”
Policy
Sens. Cantwell, Blackburn and Heinrich introduced the COPIED Act, which would give content creators control over whether AI models can be trained on their work.
Reps. McCaul and Moolenaar called for an intelligence community assessment of Microsoft’s G42 partnership. They’re worried about the latter’s ties to China.
Sen. Ron Wyden called for the FTC to investigate the exodus of privacy and regulatory staff from Google in recent months. He thinks the staffing changes might mean Google’s violating a deal it made with the agency in 2011 to “maintain a comprehensive privacy program”, and is worried the company’s releasing AI products without sufficient protections.
The Senate Commerce Committee held a hearing on AI and privacy this week.
The US government plans to spend up to $1.6b on R&D for advanced chip packaging, a technology crucial for AI chips.
An AI bill “may … be included” in Labour’s first King’s Speech next Tuesday, The Guardian reports.
The EU AI Act was published in the Official Journal, and will come into force on August 1. That means codes of practice have to be drawn up by May 2, 2025.
The EU AI Office will “launch a call for expression of interest as early as next week to select the stakeholders to draft the codes of practice”, according to MLex.
Influence
OpenAI joined BSA | The Software Alliance.
On Transformer: Industry efforts to kill California’s AI regulation bill are ramping up.
Andreessen Horowitz recently launched “stopsb1047.com”, urging voters to lobby their representatives against the bill.
Y Combinator, meanwhile, recently organised a letter from startups claiming the bill “would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software”.
Scott Wiener, the state senator behind the bill, lambasted both orgs’ statements, calling them “inaccurate” and “highly inflammatory”. The jail claim, he said, was “categorically false”, “irresponsible”, and “a scare tactic”.
In their campaigns, Andreessen Horowitz and Y Combinator have positioned themselves as speaking up for “the little guy”. Yet reality is rather different. Not a single Big Tech company has publicly supported the bill, and many oppose it. A recent California Assembly Committee on Judiciary analysis lists Google as opposing unless amended, Meta as having “concerns”, and IBM as fully opposing.
In fact, all major AI players — Apple, Microsoft, Google, Meta, Amazon, OpenAI and Anthropic — are either directly lobbying against the bill, or belong to organisations that are, including TechNet, the Consumer Technology Association and Chamber of Progress.
Forbes has a great profile of Jacob Helberg, who might have played some role in the RNC’s pledge to repeal the AI EO.
One particularly interesting paragraph: “To some people close to Helberg, his conversion to Trump-Republican (‘a true love story,’ he told The Information) is little more than advantageous positioning. Until recently ‘he didn’t have a view on AI or had ever expressed this super close relationship to Trump,’ said one Silicon Valley bigwig who travels in Helberg’s circles. ‘Now’s the time to get noticed by Trump because he’s paying attention to anything that helps him get elected.’”
David Sacks is reportedly due to speak at next week’s Republican National Convention.
Labour reportedly held its post-election reception for MPs at Google’s offices.
Industry
Microsoft gave up its OpenAI board observer seat, and Apple reportedly won’t take one. The moves are apparently intended to assuage antitrust concerns.
Meta will reportedly release Llama 3 405B on July 23.
OpenAI has developed a five-tier system for tracking its progress towards AGI, Bloomberg reported.
The tiers are as follows:
Level 1: Chatbots, AI with conversational language
Level 2: Reasoners, human-level problem solving
Level 3: Agents, systems that can take actions
Level 4: Innovators, AI that can aid in invention
Level 5: Organisations, AI that can do the work of an organisation
Executives reportedly told employees that OpenAI’s current models are at Level 1, but that the company is close to reaching Level 2. They demoed a version of GPT-4 which “shows some new skills that rise to human-like reasoning”.
Oracle and xAI’s $10b chip rental deal is reportedly dead, with xAI instead building its own AI data centre in Memphis. Dell and Supermicro will provide the chips.
Oracle does have a deal with Microsoft, though, which will reportedly involve it building a cluster of 100,000 GB200 chips (to be used by OpenAI).
Andreessen Horowitz is reportedly building a 20,000 GPU cluster for its portfolio companies to use. It’s already got “thousands” of chips, The Information reports.
OpenAI now blocks users in China from accessing its services, but people can still access its models via the Azure China platform, The Information reported.
AMD bought Silo AI, which builds custom LLMs for companies, for $665m.
SoftBank bought Graphcore for a reported $600m. Graphcore had raised $700m and was valued at $2.5b in 2020.
Helsing raised $487m at a reported $5.4b valuation.
Groq is reportedly raising $300m, led by BlackRock, at a $2.2b valuation.
Fireworks AI raised $52m at a $552m valuation.
Index Ventures raised $2.3b, which will largely be invested in AI.
Corning raised its sales forecast, saying the outperformance “was primarily driven” by sales of its optical connectivity products for GPU interconnections in AI data centres.
OpenAI announced a partnership with Los Alamos National Lab, with the latter planning to use OpenAI’s products to augment its bioscience research.
AWS now lets users fine-tune Claude 3 Haiku within Amazon Bedrock.
Scale AI is AWS’s first model customisation and evaluation partner.
The Information, meanwhile, argues that Scale’s valuation doesn’t make any sense.
Anthropic made it easier for developers to use Claude for prompt engineering.
Samsung announced a bunch of AI features for its new phones.
Poe now lets users create web apps.
Bioptimus released a model for diagnosing diseases from images.
EvolutionaryScale released a protein language model which has created new fluorescent proteins.
The OpenAI Startup Fund and Arianna Huffington’s Thrive Global are funding Thrive AI Health, a new company which will “build a customised, hyper-personalised AI health coach”.
Moves
Former Meta lobbyist Martin Signoux is OpenAI’s new AI policy lead for Europe.
Halak Shrivastava and A.J. Bhadelia, both ex-Google, are Cohere’s new public policy and regulatory affairs lead and North American government affairs lead, respectively.
Tomek Korbak, formerly of Anthropic, joined the UK AI Safety Institute.
Will Hurd joined the board of Personal AI. He was formerly a congressman and an OpenAI board member.
Zarinah Agnew is joining the Collective Intelligence Project as a senior fellow.
Max Zeff joined TechCrunch as a senior writer, covering AI.
Intuit is laying off 1,800 people and reorganising itself around AI.
Thousands of Samsung workers in South Korea have gone on strike.
Best of the rest
A bearish report on AI from Goldman Sachs got a lot of attention this week. I liked Noah Smith’s response.
FutureSearch estimates that 55% of OpenAI’s revenue comes from people subscribing to ChatGPT, while 21% comes from ChatGPT enterprise and 15% from API access.
New polling suggests that 75% of both Republicans and Democrats think “taking a careful controlled approach” on AI is more important than racing to beat China.
A report from the Social Market Foundation, UK Day One Project and Institute for AI Policy and Strategy argued that there will soon be a huge AI assurance technology industry, and that the UK has the potential to play a leading role in it.
Stan McChrystal, Mark Sedwill, and a bunch of other big names played an AGI wargame, which was filmed for a documentary coming out early next year.
Russia has been running an AI-powered propaganda mill, which has now been taken down by the US, the Netherlands and Canada.
LawAI’s Cullen O'Keefe put out a “Chips for Peace” framework, which would commit countries to domestically regulating AI, sharing the benefits of safe systems broadly, and ensuring that nonmembers of the framework can’t undercut the commitments.
A Spanish youth court sentenced 15 minors to a year of probation for spreading AI-generated nudes of their classmates.
Scammers are deepfaking company CEOs’ voices to trick employees into sending them money.
An analysis of the New York Times’ AI coverage found that its reporting is “disproportionately influenced by the perspectives of individuals within the commercial technology industry”.
Google said Gemini 1.5 Pro’s longer context window helps its robots perform better.
Metaculus launched an AI forecasting benchmark tournament, to see how good bots are at forecasting.
A new study found that ChatGPT’s performance on coding problems dropped significantly for problems published after 2021, suggesting it relies on having seen the problems and their solutions in its training data.
83% of Chinese decision-makers said they used generative AI, compared to only 65% in the US.
Anthropic is partnering with the Edinburgh Fringe, and will run a bunch of AI education workshops at the festival.
Thanks for reading. Have a great weekend — see you next week.