OpenAI employee says he was fired for raising security concerns to board
Leopold Aschenbrenner also said he was interrogated about his team’s “loyalty to the company”
Leopold Aschenbrenner, an OpenAI safety researcher who was reportedly fired earlier this year for “leaking”, has now offered his side of the story.
In an interview on the Dwarkesh Podcast, published on Tuesday, Aschenbrenner said OpenAI told him he was fired for raising security concerns to the company’s board. He was also ousted, he said, for sharing a document that OpenAI alleged contained sensitive information, a charge he denies.
Aschenbrenner said he first received a formal warning from OpenAI’s HR department after he raised concerns about OpenAI’s security practices to the board. “I wrote an internal memo about OpenAI's security, which I thought was egregiously insufficient to protect against the theft of model weights or key algorithmic secrets from foreign actors,” Aschenbrenner said, noting that he “shared this memo with a few colleagues and a couple of members of leadership, who mostly said it was helpful.”
A few weeks later, OpenAI suffered a major security incident, Aschenbrenner said, prompting him “to share the memo with a couple of board members.” But he was swiftly reprimanded. “It was made very clear to me that leadership was very unhappy I had shared this memo with the board,” he said. “Apparently, the board hassled leadership about security.”
Aschenbrenner said he received an “official HR warning” for this. Though he wasn’t fired immediately, when he was fired several months later the company told him this incident was a contributing factor. “When I was fired, it was made very explicit that the security memo was a major reason for my being fired,” Aschenbrenner said. “They said, ‘the reason this is a firing and not a warning is because of the security memo.’”
The incident that immediately preceded Aschenbrenner’s firing was, according to him, innocuous. Aschenbrenner had written a document on “preparedness, safety, and security measures”, which he shared with some external researchers. This, he said in the interview, was “totally normal at OpenAI at the time”, and the document had been scrubbed of sensitive information.
OpenAI reportedly told Aschenbrenner that the document contained sensitive information because it included “a line about planning for AGI by 2027-2028”. But Aschenbrenner noted that this planning timeline was already public: OpenAI had mentioned wanting to solve alignment within four years in its own Preparedness document, published months before Aschenbrenner shared his own.
In Aschenbrenner’s telling, it sounds like OpenAI was looking for a reason to fire him. Aschenbrenner said that before he was fired, a lawyer asked him “about my views on AI progress, on AGI, the appropriate level of security for AGI, whether the government should be involved in AGI, whether I and the superalignment team were loyal to the company, and what I was up to during the OpenAI board events”. Aschenbrenner was one of very few OpenAI employees who did not sign a letter calling for Sam Altman’s return after the board fired him. Many of those employees have since left the company.
Earlier on Wednesday, a group of current and former OpenAI employees issued a statement calling for stronger whistleblower protections for lab employees, saying many of them fear “retaliation” for raising concerns.
OpenAI did not respond to a request for comment.