"A non-profit board should not be running a critically important company like OpenAI" - Potential counter-argument: decisions regarding the development of AGI shouldn't be driven by those potentially corrupted by profit motives.
Causing a disaster for a company is orders of magnitude less bad than potentially creating a disaster for society.
>the structure of a nonprofit board (majority rules / no committees / not much more info than that) should not be running OpenAI.
I do agree wrt profit motives. I meant more than anything else the actual board construction is shoddy.
In fact I edited it to be more clear. I'm pretty sympathetic to the AI safety argument, but man the board comp / structure / how we got here is lacking.
Good article, and thank you for the shout-out! I would add one note when thinking about Ilya's motivations -- I don't know if it was necessarily about *safety*. The OpenAI charter is about working toward AGI. While the GPT-class of technologies is cool, many researchers (perhaps including Ilya) doubt that that's actually the right path toward AGI. There's some chance that all the emphasis on LLMs might be viewed as *irrelevant* commercialization, a distraction from the AGI-focused mission of the nonprofit.
A little bit of speculation on my part, but would guess the super alignment timing makes me *think* it is. Clearly a contrast between non profit past and openai whatever it is today
Thank you for the board timeline omg saved me so much time.
"A non-profit board should not be running a critically important company like OpenAI" - Potential counter-argument: decisions regarding the development of AGI shouldn't be driven by those potentially corrupted by profit motives.
Causing a disaster for a company is orders of magnitude less bad than potentially creating a disaster for society.
Hmm, I guess I wish I could restate it to:
>the structure of a nonprofit board (majority rules / no committees / not much more info than that) should not be running OpenAI.
I do agree wrt profit motives. I meant, more than anything else, that the actual board construction is shoddy.
In fact, I edited it to be clearer. I'm pretty sympathetic to the AI safety argument, but man, the board comp / structure / how we got here is lacking.
Good stuff. With how the world is scaling, AGI is potentially less than 5 years away. Drastic actions follow from that thought.
perfect content for the name of this blog
Ha ha
Good article, and thank you for the shout-out! I would add one note when thinking about Ilya's motivations -- I don't know if it was necessarily about *safety*. The OpenAI charter is about working toward AGI. While the GPT-class of technologies is cool, many researchers (perhaps including Ilya) doubt that that's actually the right path toward AGI. There's some chance that all the emphasis on LLMs might be viewed as *irrelevant* commercialization, a distraction from the AGI-focused mission of the nonprofit.
A little bit of speculation on my part, but the superalignment timing makes me *think* it is. There's clearly a contrast between the non-profit past and OpenAI, whatever it is today.
Thank you for the board timeline, omg it saved me so much time.
Bang-up job as always.
@Doug O'Laughlin Too Powerful [Alt]man, No [Alt]ernative AI Boss-Profile
a story about Samuel Harris 'Sam' Altman, from the point of view of a backend programmer
https://prada.substack.com/p/too-powerful-altman-no-alternative
Excellent writing Doug! 🔥
Then this Bloomberg story doesn’t make sense to me:
>OpenAI Staff Threaten to Go to Microsoft If Board Doesn’t Quit
>- Majority of OpenAI employees sign letter seeking new board
>- Board member Ilya Sutskever is among the signatories
Why is Ilya asking for a new Board?
I mean, now the board is a four-member board and he's the minority vote? He'd have no allies to push back with if he has regrets.