Wiener: SB 1047 is 'not looking to cover startups'
At a town hall event, California Sen. Scott Wiener said the bill is facing pushback from big tech companies
California Senator Scott Wiener defended his SB 1047 bill against critics at an event on Thursday, dismissing accusations that the bill was drafted by tech giants and would hurt open source developers.
“The big tech companies have not been cheerleading for this bill, to put it mildly,” Wiener said, noting the opposition from tech trade association TechNet. The AI Alliance, backed by Meta and IBM, has also opposed the bill, as have partners at venture capital giant Andreessen Horowitz.
Emphasising that the bill was only focused on the “absolute largest models”, Wiener announced that the bill would soon change to stress this further: instead of covered models being defined as those trained with 10^26 FLOPs, they’ll now be defined as those trained with 10^26 FLOPs that also cost at least $100 million to train.
“We’re not looking to cover startups, we’re not looking to cover folks who are creating smaller models,” Wiener stressed. “I want this to be light touch.”
Wiener also argued that the bill would not harm the open source ecosystem as some have suggested, clarifying that its shutdown provision only applies to models in the developer’s possession, and that developers would not be liable for the effects of their model if someone else had significantly fine-tuned it. Both points, he said, would be made explicit in forthcoming amendments to the bill.
Earlier in the week, California Governor Gavin Newsom dodged questions about the bill, instead saying that he worries about over-regulation but also takes warnings from AI developers about the technology’s risks seriously.
Wiener was joined at the event by Ari Kagan of Economic Security Project Action, a sponsor of the bill. Kagan emphasised many of the same points as Wiener, saying that the covered model definition is intended to target “a very narrow group of developers” — namely, “an extraordinarily small number of companies spending hundreds of millions of dollars to train their models”.
The bill, Kagan said, requires such companies to test their frontier models for an “extraordinarily narrow subset of extreme risks”, such as whether the model makes it significantly easier to develop bioweapons or carry out cyber attacks on critical infrastructure. If they find those extreme risks, the bill imposes “additional obligations” on companies to mitigate them. “The types of things that you’re going to need to do are going to be proportional to the amount of risk you’re seeing,” he said. “You’re going to need to put in place safeguards, if you do have these risks.”
Kagan acknowledged that many critics are asking why the bill regulates models themselves, as opposed to focusing on how the models are actually used. Importantly, he said, the types of risks the bill is designed to address are already illegal. “Unfortunately, that’s not enough on its own,” Kagan said, noting that “if a terrorist organisation wants to use an extraordinarily powerful model to cause catastrophic harm … the fact that you’ve made that illegal is not going to stop them from doing it”.
“There are some risks where you don’t want to just sit around and wait to try to send someone to jail after it happens, because then the train has already left the station,” he added.
Wiener noted that advocacy groups had pressed him to implement harsher regulation, including proposals that would require companies to obtain a license in order to train models above a certain size. “I rejected that — I don’t want the government involved in deciding what models can and can’t be trained,” he said.
Closing the session, Wiener responded to some of the more outlandish criticism of the bill. “I don’t think this is thought policing: it’s about doing a safety evaluation.”