Lawrence Lessig is very worried about freely available AI model weights
"Open weights create a unique kind of risk," says the open source pioneer
Lawrence Lessig is one of the open-source community's most prominent advocates. As founder of Creative Commons and a former board member of the Free Software Foundation and the Electronic Frontier Foundation, Lessig is a legend in the free and open-source software community.
But the longtime advocate of openness now thinks that, with AI, the risks of freely available model weights may be too great. In a recent interview with Transformer, Lessig said that “open-weight [AI models] create a unique kind of risk”.
“You basically have a bomb that you're making available for free, and you don’t have any way to defuse it necessarily,” Lessig said, referring to AI models whose weights are made freely available for anyone to download and modify. Users can easily strip the safety guardrails from such models, making them more prone to misuse than closed-weight models such as OpenAI’s GPT-4.
Open-weight models have been a significant contributor to AI “nudification”, AI-generated child sexual abuse material, AI-powered propaganda, and scams. Some researchers fear that future, more powerful models could be used to create bioweapons or carry out cyberattacks.
Lessig argued that “we ought to be anxious about how, in fact, [AI] could be deployed or used, especially when we don’t really understand how it could be misused”. He noted that though current models are unlikely to pose a significant risk, future models might.
His comments come at a time when open-weight AI models are increasingly in the spotlight, thanks to regulatory efforts in California and elsewhere which would require safeguards on models with particularly dangerous capabilities. Yann LeCun, the chief AI scientist at Meta (which has made the weights for its Llama 3 model publicly available), has said “the future has to be open source”, despite the risks.
Lessig, who is now a professor at Harvard Law School and is representing a group of OpenAI whistleblowers, dismissed comparisons to earlier technologies such as open-source software, where open access to source code is widely credited with improving security and fostering innovation. “It’s just an obviously fallacious argument,” he said. “We didn’t do that with nuclear weapons: we didn’t say ‘the way to protect the world from nuclear annihilation is to give every country nuclear bombs.’”
“It’s not inconsistent to recognise at some point, the risks here need to be handled in a different kind of way ... The fact that we believe in GNU Linux doesn’t mean that we have to believe in every single risk being open to the world to exploit,” he added.
Lessig said his concerns are driven by computer scientists’ lack of knowledge about controlling AI systems. He said he is particularly worried about a system getting deployed “with a directive to protect itself against anybody who’s attempting to disable it,” making it “extremely hard” to regain control.
Despite his concerns, Lessig didn’t entirely dismiss the value of open-source AI. “Especially for the developing world, the only way AI is going to be accessible is through open-source AI projects,” he said, acknowledging the potential benefits of widely accessible AI models. But he stressed the need for better practices to mitigate the risks: on-chip kill-switches for dangerous models, techniques to prevent safety guardrails from being removed from open-weight models, and further research into how to control AI systems.
“If you had that,” he said, “then maybe you wouldn’t worry ... but we don’t have any of those things right now — and so therefore, we’ve got to worry.”