Sam Altman was 'outright lying to the board', says former board member
In an interview with the TED AI Show, Helen Toner said that the four OpenAI board members who fired Sam Altman "couldn't believe things that Sam was telling us"
In an interview with the TED AI Show, former OpenAI board member Helen Toner made a series of dramatic accusations of misconduct against CEO Sam Altman.
Toner accused Altman of “outright lying to the board”, to the extent that the board “couldn’t believe things that Sam was telling us”.
Notably, Toner said that when Altman tried to push Toner off the board, he did so by “lying to other board members”. That matches previous reporting from the New York Times and Wall Street Journal.
She gave several other examples of Altman's deception. Toner said Altman didn't inform the board about the launch of ChatGPT, didn't tell the board that he owned the OpenAI Startup Fund (despite "claiming to be an independent board member with no financial interest in the company"), and "gave [the board] inaccurate information" about OpenAI's safety processes.
Toner also said that executives at OpenAI came to the board accusing Altman of “psychological abuse” and saying they couldn't trust him. In an op-ed for The Economist this week, Toner and fellow ex-board member Tasha McCauley said “senior leaders had privately shared grave concerns with the board, saying they believed that Mr Altman cultivated ‘a toxic culture of lying’”.
Toner said that some executives have "since tried to … minimise what they told us", seemingly referring to Mira Murati, who downplayed her concerns after the New York Times reported that those worries were part of what motivated the board to oust Altman. In the TED interview, Toner said that the conversations with executives were "really serious", with executives sending the board "screenshots and documentation" demonstrating Altman "lying and being manipulative in different situations".
When asked why many OpenAI employees supported Altman's return, Toner attributed it to fear. "It's really important to know … how scared people are to go against Sam," she said, adding that employees "were really afraid of what might happen to them" because they had seen Altman retaliate against critics before.
In a statement given to TED, OpenAI board chair Bret Taylor failed to address Toner’s specific claims. “An independent committee of the board worked with the law firm WilmerHale to conduct an extensive review of the events of November,” Taylor said, which “concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers or business partners”. In their Economist op-ed, Toner and McCauley noted that this report was not made available to “employees, the press or the public”.
Toner’s interview follows several weeks of scandals for OpenAI, including an exodus of safety-minded employees, a spat with Scarlett Johansson, and reports showing that OpenAI executives were aware of highly restrictive exit documents.
Courtesy of TED, here’s the full transcript of this part of the podcast.
Bilawal Sidhu: So Helen, a few weeks back at TED in Vancouver, I got the short version of what happened at OpenAI last year. I'm wondering, can you give us the long version?
Helen Toner: As a quick refresher on the context here, the OpenAI board was not a normal board. It's not a normal company. The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company's public good mission was primary, was coming first over profits, investor interests, and other things. But for years, Sam had made it really difficult for the board to actually do that job by, you know, withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board.
At this point everyone always says, like what? Give me some examples, and I can't share all the examples, but to give a sense of the kind of thing that I'm talking about, it's things like, you know, when ChatGPT came out, November 2022, the board was not informed in advance. We learned about ChatGPT on Twitter. Sam didn't inform the board that he owned the OpenAI Startup Fund even though he, you know, constantly was claiming to be an independent board member with no financial interest in the company. On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change. And then, you know, a last example that I can share, because it's been very widely reported, relates to this paper that I wrote, which has been, I think, way overplayed in the press.
Bilawal: For listeners who didn't follow this in the press, Helen had co-written a research paper last fall intended for policymakers. I'm not gonna get into the details, but what you need to know is that Sam Altman wasn't happy about it. It seemed like Helen's paper was critical of OpenAI and more positive about one of their competitors, Anthropic. It was also published right when the Federal Trade Commission was investigating OpenAI about the data used to build its generative AI products. Essentially, OpenAI was getting a lot of heat and scrutiny all at once.
Helen: The way that played into what happened in November is pretty simple. It had nothing to do with the substance of this paper. The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board. So it was another example that just really damaged our ability to trust him, and actually only happened in late October last year when we were already talking pretty seriously about whether we needed to fire him.
And so, you know, there's more individual examples and for any individual case, Sam could always come up with some kind of innocuous-sounding explanation of why it wasn't a big deal or misinterpreted or whatever. But the end effect was that after years of this kind of thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. And that's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company. Not just like, you know, helping the CEO to raise more money. Not trusting the word of the CEO, who is your main conduit to the company, your main source of information about the company, is just totally impossible.
So, that was kinda the background, the state of affairs coming into last fall, and we had been working at the board level as best we could to set up better structures, processes, all that kind of thing to try and, you know, improve these issues that we had been having at the board level. But then, mostly in October of last year, we had this series of conversations with these executives where the two of them suddenly started telling us about their own experiences with Sam, which they hadn't felt comfortable sharing before, but telling us how they couldn't trust him, about the toxic atmosphere he was creating. They used the phrase "psychological abuse," telling us they didn't think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change. No point in giving him feedback. No point in trying to work through these issues.
They've since tried to kinda minimize what they told us. But these were not like casual conversations. They were really serious, to the point where they actually sent us screenshots and documentation of some of the instances they were telling us about, of him lying and being manipulative in different situations. So, you know, this was a huge deal. This was a lot. And we talked it all over very intensively over the course of several weeks, and ultimately just came to the conclusion that the best thing for OpenAI's mission and for OpenAI as an organization would be to bring on a different CEO. And once we reached that conclusion, it was very clear to all of us that as soon as Sam had any inkling that we might do something that went against him, he would pull out all the stops, do everything in his power to undermine the board, to prevent us from even getting to the point of being able to fire him. So we were very careful, very deliberate about who we told, which was essentially almost no one in advance other than obviously our legal team. And so that's kind of what took us to November 17th.
Bilawal: Thank you for sharing that. Now, Sam was eventually reinstated as CEO with most of the staff supporting his return. What exactly happened there? Why was there so much pressure to bring him back?
Helen: Yeah. This is obviously the elephant in the room and unfortunately I think there's been a lot of misreporting on this. I think there were three big things going on that help make sense of what happened here.
The first is that really pretty early on, the way the situation was being portrayed to people inside the company was, you have two options. Either Sam comes back immediately with no accountability, you know, totally new board of his choosing, or the company will be destroyed. And, you know, those weren't actually the only two options, and the outcome that we eventually landed on was neither of those two options. But I get why not wanting the company to be destroyed got a lot of people to fall in line, whether because they were in some cases about to make a lot of money from this upcoming tender offer, or just because they loved their team, they didn't want to lose their job, they cared about the work they were doing. And of course, a lot of people didn't want the company to fall apart, us included.
The second thing I think it's really important to know that has really gone under-reported is how scared people are to go against Sam. They had experienced him retaliating against people, retaliating against them for past instances of being critical. They were really afraid of what might happen to them. So when some employees started to say, you know, wait, I don't want the company to fall apart, let's bring back Sam, it was very hard for those people who had had terrible experiences to actually say that, for fear that, you know, if Sam did stay in power, as he ultimately did, that would make their lives miserable.
And I guess the last thing I would say about this is that this actually isn't a new problem for Sam. And if you look at some of the reporting that has come out since November, it's come out that he was actually fired from his previous job at Y Combinator, which was hushed up at the time. And then at, you know, his job before that, which was his only other job in Silicon Valley, his startup Loopt, apparently the management team went to the board there twice and asked the board to fire him for what they called deceptive and chaotic behavior. If you actually look at his track record, he doesn't exactly have a glowing trail of references. This wasn't a problem specific to the personalities on the board, as much as he would love to portray it that way.
Bilawal: So I had to ask you about that, but this actually does tie into what we're gonna talk about today. OpenAI is an example of a company that started off trying to do good, but now it's moved on to a for-profit model, and it's really racing to the front of this AI game, along with all of these ethical issues that are raised in the wake of this progress. And you could argue that the OpenAI saga shows that trying to do good and regulating yourself isn't enough. So let's talk about why we need regulations.
Helen: Great, let's do it.
…
[At the end of the episode, Bilawal reads the following statement from OpenAI board chair Bret Taylor]
We are disappointed that Ms. Toner continues to revisit these issues.
An independent committee of the Board worked with the law firm WilmerHale to conduct an extensive review of the events of November. The review concluded that the prior board's decision was not based on concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.
Additionally, over 95% of employees, including senior leadership, asked for Sam’s reinstatement as CEO and the resignation of the prior board.
Our focus remains on moving forward and pursuing OpenAI’s mission to ensure AGI benefits all of humanity.