OpenAI Generates More Turmoil

It's probably time to change that non-profit structure for good...

In late 2015, the blog post announcing OpenAI to the world was short and sweet. At just over 600 words, the post kicks off with:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.

That post had two named authors (neither of them Sam Altman): Greg Brockman and Ilya Sutskever. As of this week, neither of those individuals is currently working at OpenAI. To be fair, Brockman says he's going on sabbatical and will be coming back. But that's the same basic thing everyone kept saying over and over again about Sutskever after the boardroom coup nine months ago. Once the dust settled there – if it ever actually settled – we had Sutskever starting a new company, one whose pitch sounded a lot like the original mission statement of OpenAI.

As every single person in the world knows by now, OpenAI is a weird company. And a lot of that would seem to stem from its inception as a non-profit, which now seems more problematic with each passing day. The company's blog post lists 7 "founding members" alongside Brockman (CTO) and Sutskever (Research Director): Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. And the two co-chairs: Elon Musk and Altman. Let's check in on those team members cited in that post:

The authors:

  • Greg Brockman – just went on sabbatical after 9 years at OpenAI
  • Ilya Sutskever – left OpenAI a few months ago after 8.5 years

The founding members:

  • Trevor Blackwell – left OpenAI in 2017 (after just under 1.5 years)
  • Vicki Cheung – left OpenAI in 2017 (after just over 2 years)
  • Andrej Karpathy – left OpenAI in 2017 (just under 2 years), went back in 2023, but stayed just under a year in his second stint
  • Durk Kingma – left OpenAI in 2018 (just over 2 years)
  • John Schulman – just left OpenAI (almost 9 years)
  • Pamela Vagata – oddly (?) doesn't list OpenAI on her LinkedIn, perhaps because she was there for less than a year
  • Wojciech Zaremba – still at OpenAI

The co-chairs:

  • Sam Altman – became CEO in 2019 after the initial Microsoft investment, was briefly ousted from the company last year, but back in place now
  • Elon Musk – quit the board in 2018

So of those 11 named individuals, only 2 are currently working at OpenAI (again, with Brockman on sabbatical – we'll see).

That's remarkable – in a bad way – for a company as important as OpenAI has grown to become. But the reason this has come to pass would seem to be right there in that original blog post. This is just not the same company it was at the outset. And sure, that's true of most companies, which must evolve along the way. But most companies also don't start out by planting such idealistic and stringent flags in the ground. It has come back to bite them. Sure, OpenAI is still, in a way, a non-profit. In that it hasn't yet turned a profit.

That founding team signed up for a mission that not only had nothing to do with building a business, it was opposed to having anything to do with one. And those early years were undoubtedly hard, perhaps as a result, which probably has something to do with 5 of those 11 leaving in 2017 or 2018. (Musk was a different story, of course.) But the 6 that remained (minus Musk, and with Karpathy coming back) would seem to point to something else. The fact that 4 of those 6 have left in the last few months says... something.

Part of it is clearly still fallout from the Altman ouster and board shake-up last year – certainly in Sutskever's case. But the others leaving now, in what should be the 'heyday' of OpenAI, is not a great sign. Maybe it's as simple as being burned out (which Brockman implies, of course) or maybe it's related to the massive secondaries which have undoubtedly enriched many early team members. But again, this was never supposed to be about the money, so what's going on?

I'll leave the reporting to the reporters, but just trying to read between the lines a bit, this all still would seem to stem from that original non-profit status and the clearly intense tension between what OpenAI originally set out to do and what it's actually doing now. Some might say that the overall goal is still the same, and that the means to get there simply had to change. But clearly several of those original team members disagree, as many of them are now working at other startups (or their own) which are going after the same, original goal of OpenAI.

The fact that so many have gone to Anthropic, a startup born out of this same tension, must be a particular kind of sting for those who remain. Especially since, unlike Sutskever's 'Safe Superintelligence', Anthropic has a similar group of investors pumping money into the company.

But why are we seeing another exodus now? Again, this could simply be coincidental. But it may also be tied to the whispers that the company is closing in on actually morphing into a for-profit entity. And perhaps that has something to do with Musk re-filing his lawsuit against his former company after dropping it just a couple months ago (he may also just have wanted to avoid an embarrassing dismissal and re-filed in a way more likely to yield results).1

It's also just an extremely turbulent time to be a cash-incinerating AI startup. Many of OpenAI's wanna-be peers have been finding homes in the form of "hackquisitions" rather than trying to raise the billions required to push through the current cycle. Microsoft presumably will not let OpenAI run out of cash, but that relationship is also clearly tense and getting more complicated by the week.

It seems likely that OpenAI is going to need a new major cash partner at some point. Maybe they hoped that would be Apple. Or maybe it will be the Saudis. Or even the ultimate backstop: SoftBank.

At the same time, there are growing whispers that the next generation of models, led by OpenAI's GPT-5, is taking far longer to produce than had been hoped. Everyone is still hoping to see something before the end of the year, but fall events are being planned with expectations set that it won't be there. Mark Zuckerberg has said not only on the record, but on an earnings call, that he expects Meta's next-generation model, Llama 4, to take upwards of 10x the computing resources to train. This was clearly a signal to the market about Meta's cost expectations, but it may have also been a signal about what OpenAI is facing without the many billions in profit-generating business that Meta has on the side. Take that, "non-profit".

Further, there's now a growing chorus – yes, led by Zuckerberg himself, so grains of salt applied – that the "open" models are starting to win out over the closed variety. Yes, OpenAI makes the closed variety. Yes, this is perhaps why 80%+ of the founding team no longer works there.

Where is this all going? How does this all end? Impossible to know, of course. In a "normal" world, Microsoft would have bought OpenAI during the board coup. But this isn't a normal world with regard to M&A, and it seems like it won't be anytime soon. So that puts a lot of companies and people in tricky spots. And none more so than OpenAI, as the biggest and highest-profile of the AI startups.

They've continued to stand apart with great talent and great product work, but are those strengths now at risk as well? Is AGI even still in the cards? Or will it be more products that can actually be used and subsequently sold as the company continues to morph into more of a standard one, with standard business goals? That would seem to be the core question. We may be approaching an answer...

⬇️
Other Spyglass posts from the past few days...
  • Dear Zuck – This shit is embarrassing…
  • The Excuses Change, the Hackquisitions Remain the Same – Google totally doesn’t acquire Character.ai
  • Can Apple Make Premium Podcasts Happen? – SiriusXM either thinks so or hopes so or just hey, why not?
  • Well This is Awkwafina – Meta aims to resurrect celebrity AI voices right after killing off others?
  • Let’s Not Pass ‘Go’ Just Yet... – A few thoughts on the Google monopoly ruling…

1 As Musk's lawyer colorfully put it, "The previous suit lacked teeth — and I don’t believe in the tooth fairy. This is a much more forceful lawsuit."