OpenAI Boardroom Battle: Safety First
When “Move Fast and Break Things” Fails: The AI Safety Schism within OpenAI.
OpenAI: Founded for AGI Safety
Let’s start at the beginning. OpenAI was founded in 2015 with backing from Elon Musk, Reid Hoffman, Peter Thiel, AWS, and YC Research. The goal was to pursue Artificial General Intelligence (AGI) safely, for the benefit of humanity. The mission statement is simple:
Our mission is to ensure that artificial general intelligence benefits all of humanity.
There was an initial pledge of $1 billion, but far less actually arrived: roughly $100 million from Elon Musk and $30 million from Open Philanthropy. The plan was to pursue an open-source approach so AGI would be developed in public, for everyone to see. But AI research is capital-intensive, and the organization needed far more money than it had raised.
It was apparent to OpenAI that bigger models would be one of the paths forward in AI research. If OpenAI wanted to be at the leading edge of that AI research, it would cost a lot of money. That’s where the for-profit arm came in.
The for-profit arm was introduced in 2019 and was initially pitched to investors as something closer to a “donation.” The investors likely didn’t quite see it that way.
The company’s DNA likely split in 2019 during that transition to for-profit. After four years of pure AI research, the company was now also pursuing a business model.
OpenAI’s recent dynamism has mostly come from being a classic Silicon Valley startup attached to an AI safety think tank. In hindsight, it seems obvious that the two halves would eventually come to blows.
This is when the current structure was adopted. It’s a strange corporate arrangement: a hybrid of the non-profit past and a new for-profit, but capped-profit, entity. OpenAI, the public charity, owns and controls OpenAI GP, which in turn controls OpenAI Global. Let’s walk through the structure before we discuss the board in more detail.
OpenAI Structure
Let’s examine the structure from the bottom to the top. At the bottom is OpenAI Global, the capped-profit entity. The restriction on the capped-profit company is that profits beyond roughly a 100x return on invested capital must flow back to the nonprofit. Later-stage investors shouldn’t expect anything close to that multiple.
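To make the cap concrete, here is a minimal sketch, assuming a flat ~100x multiple and round numbers of my own choosing (the real terms reportedly vary by round and aren’t public):

```python
# Hypothetical illustration of the profit cap; the multiple and the check size
# are assumptions for the example, not OpenAI's actual terms.
def capped_return(invested: float, cap_multiple: float = 100.0) -> float:
    """Maximum total return an early capped-profit investor could receive
    before excess profits flow back to the nonprofit."""
    return invested * cap_multiple

# An early $10M check would cap out around $1B; anything beyond that
# would accrue to the nonprofit under this structure.
print(capped_return(10_000_000))  # 1000000000.0
```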
Microsoft, in particular, has hitched its wagon to OpenAI. As a later investor, it cannot make the ~100x return, but it has put roughly $10 billion into OpenAI, mostly in the form of cloud computing credits. The deal’s full terms haven’t been disclosed, but Microsoft clearly holds the largest economic stake in OpenAI.
Next is OpenAI GP. GP stands for General Partner, the entity that makes decisions on behalf of the limited partners. This is the key vehicle of power: it is how the public charity controls OpenAI Global, and because the charity owns the GP, the charity controls OpenAI.
Finally, let’s talk about OpenAI, the public charity. This is the oldest entity in the structure. Elon Musk and Sam Altman originally co-chaired it, and the board grew as the organization grew. Rather than recreate the work, I’ll point you to the board timeline laid out in a recent piece by another writer. I will talk quite a bit about the board from here on out.
Board Governance: It’s a Revolving Door
This graphic summarizes the board changes that happened over the years.
Someone has left the board almost every year since 2016, but the central focus should be on 2023, when one-third of the board left within two quarters. This created a power vacuum that never got filled. If a power struggle was going to happen, this was the window, and it ended with Sam Altman being fired.
Let’s dive deeper into some of the players and the stage that was set for Sam Altman’s firing on Friday.
The Players and the Stage
I believe the key moment that created this boardroom drama was the quick succession of departures earlier this year. Reid Hoffman stepped down because of his involvement in Inflection AI but was privately unhappy about leaving. Around the same time, Shivon Zilis was likely forced out given her ties to Elon Musk (now building the rival Grok), as the mother of twins fathered by him. And lastly, Will Hurd stepped down for a presidential run. Here are the dates.
March 3rd - Reid Hoffman steps down from OpenAI
March 23rd - Shivon Zilis steps down from OpenAI
End of June - Will Hurd steps down from OpenAI
This took the board from nine people to six. Of the remaining six, three are OpenAI employees and founders, and the other three are independent. The independent members leaned much harder into the AI safety camp than the OpenAI employees. But it would likely take Ilya to tip the balance, fire Sam, and depose Greg.
I wrote a quick bio of each independent director so you could get to know them a bit better.
Adam D’Angelo is best known as the cofounder and CEO of Quora. He was the CTO of Facebook until 2008, so he’s been deeply connected in Silicon Valley. Adam isn’t allegiance-free, either: he’s mostly been focused on running Poe, a GPT wrapper.
Adam is an interesting character in this story. When he joined the board, he publicly stated that he did so with AI safety in mind.
He’s deeply passionate about AI safety. He isn’t quite in the hardcore camp, but he’s more than willing to get into deep reply threads with AI-alignment people like Eliezer Yudkowsky. He’s against an AI research pause but has also signed a statement calling AI a “nuclear-level threat” to humanity. I would say he’s safety-concerned but not a hardliner.
Helen Toner is a newcomer to the board. She joined in September 2021, taking over Holden Karnofsky’s seat. Holden’s seat came with Open Philanthropy’s early donation, but he was forced to resign when his wife co-founded Anthropic.
She has written papers on AI safety and specification at Georgetown’s Center for Security and Emerging Technology. The specification paper is particularly interesting: it focuses on the gap between what the designer intends a system to do and what the system’s behavior actually optimizes.
One paragraph captures what she probably feels about OpenAI. It describes how models go wrong, and OpenAI’s recent release of the GPT Store and custom GPTs likely cut against this paragraph’s ethos.
An example of slower, more pernicious effects is how misspecification has already been implicated in harms caused by social media platforms. The business model of companies such as Facebook and YouTube uses machine learning systems to recommend content and keep users engaged on their apps. User engagement—as measured by time spent on the site, probability of clicking a link, or similar metrics—may seem like an innocuous enough objective for a machine learning model to optimize. In practice, however, it appears that disinformation or extremist content can often be highly engaging for certain subsets of users, meaning that a platform’s machine learning model learns to serve this content in order to keep customers active. This is an example of a divergence between the ideal specification—which would presumably be to maximize user engagement without radicalizing subsets of users—and the design and revealed specifications.
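To make that divergence concrete, here is a minimal toy sketch (my own illustration, not from the paper) in which an objective that scores only engagement surfaces exactly the content an ideal specification would penalize:

```python
# Toy example of specification divergence; the items and scores are invented.
items = [
    {"name": "cat video",      "engagement": 0.6, "radicalizing": 0.0},
    {"name": "news recap",     "engagement": 0.5, "radicalizing": 0.1},
    {"name": "extremist rant", "engagement": 0.9, "radicalizing": 0.9},
]

def design_score(item):
    # What the deployed model actually optimizes: raw engagement.
    return item["engagement"]

def ideal_score(item, penalty=1.0):
    # What the designers presumably want: engagement minus harm.
    return item["engagement"] - penalty * item["radicalizing"]

print(max(items, key=design_score)["name"])  # extremist rant
print(max(items, key=ideal_score)["name"])   # cat video
```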
She also has ties to Tasha, as they both sit on GovAI’s advisory board. That organization is itself affiliated with Open Philanthropy, an early donor to OpenAI. You can guess where her loyalties lie.
Tasha McCauley is the CEO of GeoSim Systems and has been a long-time board member. Her Twitter is private, and she doesn’t have many public appearances or interviews beyond a YouTube video from 2015. She likely forms a bloc with Helen Toner, given their advisory seats at GovAI. She would have had ample opportunities to talk about OpenAI with Helen. It’s easy to see a clique.
Now, let’s talk about the OpenAI directors. First, we will start with Greg, Sam, and then the all-important Ilya.
Greg Brockman is one of the co-founders and the chairman of OpenAI’s board. He dropped out of MIT, became Stripe’s first CTO, and left Stripe to join OpenAI in 2015. He’s known for his work ethic and ability to ship code.
Greg has been the most open about what happened. His timeline is our key source for the behind-the-scenes action at OpenAI. He dropped a tweet saying that he and Sam were blindsided. Sam was fired on Friday in a meeting Greg wasn’t part of; Greg was then told he was being removed from the board and subsequently quit.
Remember, this is a six-person board, so four people voted to remove Sam, and then Greg. This could only have happened once the board was smaller and more aligned on AGI safety; it’s hard to imagine this kind of coalition-building happening while Reid, Will, and Shivon were still on the board. What’s more, new directors were likely in the pipeline, so it had to happen now. Let’s talk about Sam next.
Sam Altman was fired: ousted from the board and from his spot as CEO of OpenAI Global. He’s the heart of the boardroom dynamics here. Let’s talk about his past before we speculate as to why.
Sam wasn’t always affiliated with OpenAI. Before it, he was probably best known as the president of Y Combinator; he went on to co-found OpenAI and help fund it through YC Research. He came to the space as an operator and leader rather than as a technologist, and he has had several angel investments go right, such as Airbnb and Stripe. During the shift to for-profit, Sam Altman became the CEO of OpenAI.
Sam has been on the board from the beginning, but he has also been involved in many different projects over time. Why he left Y Combinator remains something of a mystery, and his firing from OpenAI likely had many reasons. The board’s stated reason was a lack of candid communication, and an internal memo confirmed there was no malfeasance. Our best guesses at what that communication gap involved: a new venture to disrupt Nvidia’s hold on chips, or miscommunication around OpenAI DevDay. A continued push for GPTs-for-everyone likely runs against the AI-safety board members’ interests.
I have a theory that fits with everything we know. Sam is the former head of Y Combinator, an incubator for startups, and now sits atop a legendary startup in OpenAI. In Sam’s mind, as important as AI safety was, it was time to move fast and break things, the old motto of Zuckerberg’s Facebook. Meanwhile, the legacy OpenAI non-profit board remained focused primarily on AGI safety. The past of OpenAI’s non-profit nature and the future of its ChatGPT aspirations were clashing. All that was needed was a swing vote during a crucial power vacuum. That vote was Ilya’s.
Ilya Sutskever is the key vote that made the coup possible, ousting Sam and Greg last Friday. Ilya, like Greg, has been there since the beginning. He’s always been on the technology side: a co-creator of AlexNet, an author on the AlphaGo paper, and a co-inventor of sequence-to-sequence learning at Google Brain. He left Google Brain in 2015 to become a cofounder and the chief scientist of OpenAI. OpenAI was pitched as the anti-Google, given the extreme concentration of AI research talent there, and its key focus was safety.
In many ways, Ilya was one of the original pieces of OpenAI. But as the for-profit arm of OpenAI came to rule the organization’s mission, he likely started to feel uncomfortable with the “Move Fast and Break Things” ethos of the startup.
In July of this year, he introduced the “Superalignment” project, focused on solving the problem of AI alignment before AGI happens. That is starkly different from DevDay, which pushed DALL-E, web searching, and ChatGPT out to as many people as possible. So along comes this power vacuum, and he has a chance to refocus the organization on safety over technological progress.
There are even some clues in Ilya’s Twitter. A few tweets come off as bitter, and the timeline would track with a four-month power play. While Sam Altman was presenting alongside Microsoft, testifying to Congress in May, and pursuing a new venture, Ilya likely felt Sam had lost track of what was important.
There’s even something akin to a warning tweet in October. Was Ilya saying that Sam valued artificial intelligence over human qualities, like alignment with humans?
Or maybe part of the reason Sam had to go, in Ilya’s mind, was Sam’s intense public-facing image. Sam is the public face of OpenAI; did Ilya think that ego was holding OpenAI back?
We don’t know exactly what happened, but it’s clear that Ilya’s loyalty lies with AI safety. The board went from nine members to six in a matter of months, with the last departure at the end of June. That left roughly four months to plot the coup. New board members will eventually be elected, so the AI-safety majority might not hold for long.
Ilya likely initiated firing Sam and kicking Greg off the board. He holds the most power on the board outside of Sam and Greg and has been there the longest. Ilya was the one who texted Sam to join the board meeting, and I imagine the event had to be spearheaded by the person who masterminded it all. He likely held the sway to convince the others that Sam no longer had AGI safety as OpenAI’s first interest.
It’s a pretty simple coalition. Helen is the newest member of the group and heavily favors AI safety. Tasha and Helen likely have a friendship, so that’s an easy bloc. Adam probably had to be convinced separately, but Sam’s ambitions and DevDay were possibly the catalyst. It probably took a while for most of the board to harden on the decision, but they did it on Friday, November 17th. They fired Sam, and Greg quit afterward.
This was a power play by Ilya, centered specifically on AI safety. It isn’t the first safety-driven rupture at OpenAI, either: Anthropic is a spinoff of OpenAI with a heavier AI safety focus. This time, though, the push for safety came from within.
He must have been at least somewhat embittered that Sam was pushing the company so fast without safety front of mind, or had some other disagreement we can’t see. In Ilya’s mind, the move was likely about refocusing on the AI safety mission the company was founded on rather than the commercial focus of recent years. To the board, the CEO of OpenAI wasn’t pursuing AGI in alignment with the mission statement, so it made sense to boot him as CEO.
The problem is that boards govern organizations, and it’s clear this organization doesn’t want to swallow the bitter medicine the board prescribed. The move likely will not hold; instead, Ilya will likely be the one getting the boot. Power plays can backfire.
External Pressure on OpenAI: Incentives Win
Until now, this write-up has covered only the state of the coup as of Friday. But there’s always a countermove, and in this case it looks like most of the organization and the investors are in Sam’s camp. OpenAI looks to be in the process of reinstating Sam Altman, and key executives, including interim CEO Mira Murati, Chief Strategy Officer Jason Kwon, and Chief Operating Officer Brad Lightcap, are threatening to resign.
The board overstepped. It’s clear the stakeholders do not all agree, and in a power vacuum, Ilya and the AI safety camp pushed hard for something the investors and employees of OpenAI didn’t want. OpenAI, the charity, is not the one actually steering the ship.
This Bloomberg article gives a good overview of the situation, but I think the key lever is going to be the employees. Sam Altman tweeted this, likely in response to the wave of employees quitting OpenAI in protest.
Meanwhile, an engineering manager at OpenAI is retweeting the “heart” emojis posted in response to Sam, which amounts to an informal roll call of the employees who have threatened to quit. Hell, OpenAI’s CTO and new interim CEO replied to Sam with a heart emoji.
The board made a blunder. OpenAI’s employees will likely get their CEO back by Monday, and Satya Nadella’s $10 billion in Azure credits will have some say in the future of OpenAI. What’s clear is that board governance is grossly mismanaged. A non-profit board, with little else in place, should not be running a critically important company like OpenAI. There needs to be a better system of governance: the board either needs to be strengthened, or something else needs to change.
Just look at the turnover and the lack of transparency around re-election. What’s more, the overwhelming resources of OpenAI’s investors likely tilt the field in their favor. Remember, Microsoft’s lawyers spar with the government and win.
I think Ilya will leave OpenAI when Sam is reinstated; he has to be the player who initiated the power play. The entire current board will leave, and a new board with fewer AI safety people (sadly) will be installed. This is the perfect opportunity for the structure to be redefined. I would not be surprised to see the OpenAI charity and capped-profit structure flipped, with a formal board at the GP becoming the real locus of power.
The boardroom move was amateurish and sudden. And as much as boards have formal legal power, the organizations they govern have power of their own. It’s all a construct, and the people of OpenAI will get their way, and hopefully a better governance structure along with it.
This is not to say that AI safety isn’t important. But it’s absurd that a gap in the board led to a hugely disruptive event. The longer-term problem of safe AI matters, but you don’t address it with a sudden shakeup of the founder and an exodus of half the employees. OpenAI has been doing something special for a long time, beating better-funded research organizations like Google and Microsoft. It’s probably in everyone’s best interest to keep the team together.
That’s all for today! This entire post is free because this is a take fest. But if you want more governance and incentive-focused writing, I write a second substack where I talk exclusively about this!
Mule’s Musing (name TBD) is where my co-authors and I write about topics like this. This piece, however, is just me today. Until next time. If you enjoyed this, please consider sharing; it helps me do more free pieces.
"A non-profit board should not be running a critically important company like OpenAI" - Potential counter-argument: decisions regarding the development of AGI shouldn't be driven by those potentially corrupted by profit motives.
Causing a disaster for a company is orders of magnitude less bad than potentially creating a disaster for society.
Good stuff. With how the world is scaling. AGI is less than 5 years away potentially. Drastic actions because of that thought