Welcome to Transformer, your weekly briefing of what matters in AI. If you’ve been forwarded this email, click here to subscribe and receive future editions.
Top stories
On Transformer: At a busy markup session on Wednesday, the Senate Commerce Committee passed a heap of AI bills.
Most notable was the Future of AI Innovation Act, which would formally establish AISI, tell it to develop AI standards and voluntary guidelines, and create “new AI testbeds with national laboratories to evaluate AI models”.
Earlier this week, lots of tech companies — including OpenAI, Meta, and Microsoft — endorsed the bill.
Several of Sen. Ted Cruz’s amendments to the bill also passed, though, including one that targets a raft of AI ethics provisions.
It tells the President to “issue a technology directive … that prohibits any [actions, rules or guidance] by a Federal agency” that says things like:
AI systems should be “designed in an equitable way”
AI developers should “conduct disparate impact or equity assessments”.
In his opening remarks, Cruz issued some choice quotes:
“Some wealthy and well-connected AI entrepreneurs and their corporate allies hype up AI inventions as uniquely powerful and dangerous. Of course, these existential dangers are all theoretical; we’ve never actually seen such damage from AI. Nevertheless, these companies say they need to be saved from themselves by the government because their products are so unsafe.”
“China is just as happy to sit back and let the U.S. Congress do the work of handicapping the American AI industry for it … To avoid the U.S. losing this race with China before it has even hardly begun, Congress should ensure that AI legislation is incremental and targeted.”
The committee also passed the Validation and Evaluation for Trustworthy AI Act.
That one directs NIST to “develop detailed specifications, guidelines, and recommendations for third-party evaluators to work with AI companies to provide robust independent external assurance and verification of how their AI systems are developed and tested”.
And it passed the AI Research, Innovation, and Accountability Act.
That one would require NIST to “develop recommendations to agencies for technical, risk-based guardrails on ‘high-impact’ AI systems”, and require “companies deploying critical-impact AI to perform detailed risk assessments”.
Tech stocks have been bouncing around amid earnings this week, with AI a particularly big discussion topic.
On Meta’s earnings call, Mark Zuckerberg reiterated how essential AI is to its future, pitching investors on the need for huge infrastructure investments.
Revenue, he said, will mostly come from businesses using its AI tools for things like creating ads or operating customer service bots.
Meta AI, meanwhile, is supposedly “on track to be the most-used AI assistant in the world by the end of the year”.
He also said Meta’s preparing to train Llama 4, which will need “almost 10 times more” compute than Llama 3. At another event this week, he said Meta’s bought 600,000 H100s.
Microsoft, meanwhile, said it’ll spend even more on AI infrastructure next year, having spent $19b on capex last quarter.
Amazon said AWS’s AI business has a “multi-billion dollar revenue run rate”, and that it needs more compute capacity.
And Apple said it expects its new Apple Intelligence features to drive iPhone sales later this year.
Samsung’s profit soared amid AI chip demand, while AMD raised its forecast and said it now makes almost half its revenue from data centre products. (AMD also said its MI300 chip will be supply-constrained throughout next year).
On Transformer: AI companies are falling short on their promises to the White House.
It’s been a year since leading AI companies made voluntary commitments to the White House on AI safety. But adherence is patchy.
OpenAI’s “bug bounty” program excludes issues with its models, despite that being part of the commitments.
Microsoft and Meta, meanwhile, do not appear to test their models for the ability to self-replicate, a task explicitly mentioned in the commitments to the White House.
(When asked about this, a Meta spokesperson said that while the company doesn’t specifically test for self-replication, it does assess its models for autonomous cyber capabilities, which could be considered a precursor to self-replication.)
And while most companies are technically meeting most commitments, the implementation is often weak. Put together, that raises a lot of concerns about the efficacy of self-regulation.
The discourse
Ed Zitron thinks OpenAI is screwed:
“I ultimately believe that OpenAI in its current form is untenable. There is no path to profitability, the burn rate is too high, and generative AI as a technology requires too much energy for the power grid to sustain it, and training these models is equally untenable, both as a result of ongoing legal issues (as a result of theft) and the amount of training data necessary to develop them.”
Nicholas Kristof is worried about AI-generated bioweapons and other risks:
“Managing AI without stifling it will be one of our great challenges as we adopt perhaps the most revolutionary technology since Prometheus brought us fire.”
Jacob Helberg laid out how the AI policy debate is changing (conveniently, it’s apparently changing entirely in the direction of his views!):
“The debate on AI policy is undergoing a needed recalibration from safety and risk-mitigation toward national security.”
AI Action Summit boss Anne Bouverot suggested that it will go beyond voluntary commitments:
“At some point, you need to stop just doing incremental voluntary commitments and move to concrete actions.”
US AISI boss Elizabeth Kelly said we still don’t know how to make AI safe:
“There’s growing evidence that current technological approaches to AI safety are not sufficient to address the wide variety of risks that are posed by advanced generative AI models. We know that current safeguards for AI models need more work if we’re going to rely on them to protect people from harm.”
Martin Casado and Ion Stoica said there’s no point restricting open-source AI to hamper China, because China will just steal closed-source models:
“The inability of American companies to keep proprietary, infrastructure-critical IP secure has a long history.”
Lawrence Lessig said the risks of open-weights AI might be too great to leave it unrestricted:
“Whatever model weights can teach, that benefit must be weighed against the enormous risk of misuse that highly capable models present. At some point, that risk is clearly too great.”
Policy
The NTIA published a “Report on Dual-Use Foundation Models with Widely Available Model Weights”, which recommends the government “actively monitor for potential risks to arise, but refrain from restricting the availability of open model weights for currently available systems”.
The US AI Safety Institute published draft guidelines on “Managing Misuse Risk for Dual-Use Foundation Models”. It’s all very sensible stuff about planning, evaluating, mitigating risks, and improving security.
It’s open for public comment until September 9th.
The White House announced a bunch of other AI safety-related deliverables, too (nothing groundbreaking in there, though).
DOJ antitrust officials are reportedly looking into Nvidia. They’ve apparently reached out to competitors recently to ask if Nvidia’s using unfair practices to prevent competition.
The FT said the DOJ’s looking into Nvidia’s Run:ai acquisition in particular.
OpenAI responded to the senators who queried its safety practices, saying it’s “dedicated to implementing rigorous safety protocols at every stage”.
On Twitter, Sam Altman said the company is “allocating at least 20% of the computing resources to safety efforts across the entire company” and is “working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model”.
Sen. Chuck Grassley, meanwhile, asked OpenAI to outline what it’s done to make sure employees can whistleblow to the government.
New US chip manufacturing equipment export controls won’t apply to exports from Japan, the Netherlands or South Korea, Reuters reported. ASML and other chip stocks jumped on the news.
But: the US is reportedly also considering taking measures to block Chinese access to HBM chips.
The House Select Committee on the CCP said the UAE Ambassador blocked committee staffers from meeting G42 to discuss the company’s relationship with China. (The UAE said there was a “miscommunication”.)
The UK AI Bill will make AI companies’ voluntary commitments to the government legally binding, Peter Kyle reportedly told tech companies. It will also turn UK AISI into an “arm’s length government body”.
Google, Microsoft, Apple, Meta, Nvidia, Amazon, and Andreessen Horowitz were at the meeting with Kyle.
The UK Competition and Markets Authority is looking into whether the Google-Anthropic partnership has resulted in a “substantial lessening of competition”.
The UK cut $640m in funding for the AI Research Resource, and $1b for a new exascale supercomputer.
Sens. Coons, Blackburn, Klobuchar and Tillis introduced the NO FAKES Act, designed to “protect the voice and visual likenesses of creators and individuals from the proliferation of digital replicas created without their consent”.
Reps. Auchincloss and Hinson introduced the Intimate Privacy Protection Act, which would remove Section 230 immunity for companies that don’t have a “reasonable process” for addressing “cyberstalking, intimate privacy violations, and digital forgeries”.
The EU AI Office launched a “multi-stakeholder consultation on trustworthy general-purpose AI models under the AI Act”, and a call for expression of interest to participate in drawing up the GPAI Code of Practice.
The OECD launched a public consultation on “risk thresholds for advanced AI systems”.
Influence
556 organisations lobbied on AI-related issues in the first half of 2024, up from 459 last year.
OpenAI spent $800,000 in H1, while Anthropic spent $250,000. Cohere spent $120,000.
Microsoft called for laws to tackle deepfake fraud and nonconsensual explicit deepfakes.
OpenAI, meanwhile, endorsed the Future of AI Innovation Act, NSF AI Education Act, and the CREATE AI Act.
Google DeepMind researchers met White House officials to discuss sociotechnical AI safety research.
Industry
Google essentially acquihired Character.AI.
Co-founders Noam Shazeer and Daniel De Freitas are rejoining Google, and investors are reportedly being bought out at a valuation of about $2.5b.
Google is hiring 30 of Character’s employees, and paying a licensing fee for its models.
Character will supposedly stick around, but will now use open-source models instead of building its own.
Google released an experimental new version of Gemini 1.5 Pro. It’s ranked #1 on Chatbot Arena.
Google released a new, smaller version of its Gemma 2 open-source model. It also released Gemma Scope, a “comprehensive, open suite of sparse autoencoders for language model interpretability”.
The accompanying demo is an excellent and very fun introduction to interpretability and how models work.
Google’s also adding Gemini-powered features to Chrome.
TikTok has been paying $20m a month to use OpenAI’s models (via Microsoft), according to The Information. In total, Microsoft reportedly generates about $1b a year in revenue from reselling OpenAI tools.
The new ChatGPT voice features are starting to roll out.
Apple revealed that its new AI models were pretrained on Google TPUs.
Specifically, it used “8192 TPUv4 chips provisioned as 8x1024 chip slices”.
Meta’s AI assistant denied that Trump was shot at. The company apologised for it.
Meta now lets Instagram creators build AI bots that chat with fans on their behalf. It killed its celebrity chatbot service, though.
Google is tweaking Search to tackle nonconsensual explicit deepfakes.
Samsung expects its HBM3E chips to be approved by Nvidia in “two to four months”, according to Bloomberg.
xAI has considered buying Character.AI, The Information reported. Elon’s denied that it is still considering an acquisition, though.
Canva bought Leonardo.AI.
Stability announced Stable Fast 3D, which generates 3D objects from a single image.
Black Forest Labs, a new AI image and video gen startup, launched with $31m in funding (led by Andreessen Horowitz).
It released its first models, FLUX.1, which seem pretty impressive. The weights for some versions of it are freely available.
Contextual AI raised $80m. It’s building tools to improve AI models.
The very popular chatbot app Talkie is owned by Chinese firm MiniMax, the WSJ reported.
Moves
Karen Kornbluh is the new White House OSTP “principal deputy U.S. Chief Technology Officer and OSTP deputy director for technology”.
Lisa Einstein is the Cybersecurity and Infrastructure Security Agency’s first chief AI officer.
OpenAI’s Chris Lehane joined Coinbase’s board.
Best of the rest
The NYT argues that China’s catching up to the US on AI, citing Kuaishou’s Kling model in particular.
Anthropic was scraping sites even after they’d blocked the company’s bots (because it kept changing which scraper it used). It said it won’t do that anymore.
Reddit, meanwhile, is mad that Microsoft, Anthropic and Perplexity keep trying to scrape it without paying.
Perplexity launched a revenue-sharing deal with media publishers.
404 Media found that Runway’s video generation model was trained on scraped YouTube videos and pirated films.
Suno and Udio insisted in legal filings that their use of copyrighted songs to train models is fair use.
A Center for AI Safety paper found that “many safety benchmarks highly correlate with upstream model capabilities, potentially enabling ‘safetywashing’”.
Another CAIS paper claims to have discovered a way of building “tamper-resistant safeguards into open-weight LLMs such that adversaries cannot remove the safeguards even after thousands of steps of fine-tuning”.
Vox has an explainer of AI interpretability and its links to neuroscience.
NIST re-released its Dioptra AI testbed tool.
Bloomberg profiled Helen Toner.
And Toner’s employer, CSET, published a bumper report on the emergence of EUV lithography, and the lessons for future emerging technologies.
JPMorgan rolled out a tool for employees which it says can do the work of a research analyst.
Arvind Narayanan and Sayash Kapoor said that “AI existential risk probabilities are too unreliable to inform policy”.
AI Now published a very interesting looking report on lessons from the FDA for AI.
OnlyFans stars are increasingly using bots to sext with their subscribers.
A new AI wearable, Friend, launched. The main thing everyone talked about was how it spent $1.8m on a domain.
Thanks for reading; have a great weekend.