M.G. Siegler

OpenAI Needs to Build Google Cloud Before Google Can Build ChatGPT...

Or: how to burn $100B+ in real time...

Just about a year ago, OpenAI was circulating some numbers that suggested the amount of cash they'd need to burn through 2029, at which point they'd flip to being cash flow positive, was a lot. Like a lot, a lot. Like $30B.

And so, a few months later, when they declared their intent to raise $40B, it was a spectacle, but not exactly a surprise. They needed to raise roughly that amount of money at some point!

To be clear, it's still completely incredible to raise that amount of money – more than has been raised in any IPO, ever – in one round, even over a few tranches. And even Anthropic's new mega fundraise – $13B! – months later, can't hold a candle to the amount. But now I have to assume even OpenAI's number is going to be dwarfed.

By OpenAI.

That's because new projections, updated from last year and obtained by Sri Muppidi of The Information, suggest that OpenAI is no longer planning to burn around $30B through the end of the decade. It's going to be more. How much more? Oh, you know, $80B more.

That is not a typo. OpenAI is now projecting their go-forward cash burn will be around $115B before flipping to cash flow positive – which is now projected to be pushed back by a year, to 2030, per the reporting.

The numbers are such that after burning $9B this year (up from $7B originally projected), and $17B next year (up from $8B), they would need to raise again in 2027 at the latest, when burn would ramp to $35B (up from $20B). And it's that next year, 2028, when they project peak burn, at just under $50B (previously, peak burn was projected to be 2027's $20B). Burn starts to turn in 2029 now, when they'll go through a mere $8B. And then cash flow positive to the tune of $38B in 2030.
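The year-by-year figures above can be tallied with some quick back-of-the-envelope arithmetic. A minimal sketch (figures in $B, taken from the reporting; the 2028 peak is described only as "just under $50B", so the 46 used here is an assumption chosen to illustrate how the ~$115B total adds up):

```python
# Reported go-forward burn projections, in $B per year.
# NOTE: 2028 is "just under $50B" in the reporting; 46 is an
# assumed placeholder, not a reported number.
burn = {2025: 9, 2026: 17, 2027: 35, 2028: 46, 2029: 8}

total_burn = sum(burn.values())
print(f"Projected burn through 2029: ~${total_burn}B")

# A running total shows why a raise in 2027 at the latest is implied:
cumulative = 0
for year in sorted(burn):
    cumulative += burn[year]
    print(f"{year}: ${burn[year]}B burned, ${cumulative}B cumulative")
```

The cumulative column is the point: by the end of 2027 the company would already be roughly $60B deep, well past the $40B round, before the peak-burn year even starts.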

All of this suggests that in that next hypothetical fundraise (or fundraises), OpenAI would need to raise well north of $50B. Perhaps as much as $80B, just to be safe. Which is a funny sentence to write when you're talking about burning $115B over the next five years. Hell, let's say they should raise $100B, just in case.

Now, obviously, they could go public before these full cash burn needs come to pass. And they perhaps would even prefer that since the public markets will offer them more varied and less dilutive paths to access such capital. I say "less dilutive" knowing that actually none of the money raised to date has technically been dilutive, because given their still-in-place non-profit status, they've only been selling rights to shares in future profits. (After Microsoft gets paid, of course.)

But if the company is able to convert into a Public Benefit Corporation (PBC) and convert those profit promises into equity, well, they're going to be giving up massive chunks of equity overnight. Perhaps 33% of the company to Microsoft. Maybe another 12% to SoftBank. That already leaves just 55% of the company, of which the non-profit is undoubtedly going to have a large chunk. With Khosla's early investment and all the massive recent checks led by Thrive and others, the company is getting whittled down quickly – and that's without Sam Altman owning a stake! And we won't even bring up the possibility that Elon Musk gets awarded some shares here... It's complicated.

With all that in mind, if they raise $100B in the next couple of years, even if they do so at $1T – you laugh now – things are getting tight, is all I'm saying. Even if they can raise that amount privately – and they probably can – they also probably need to go public. Or be forced to take on massive loads of debt, which is always dangerous, and perhaps far too dangerous at such heights.

Can they go public? Many will scream "no way" given not just the lack of profitability, but the sinkhole to the center of the Earth they're creating with their burn. But at the same time, the revenue projections keep growing too. As does the popularity of the app. And their mindshare as the leader in AI at the moment. Which is to say, of course they can go public. And likely will.

Anyway, I sort of just wanted to say – type – all of that out loud to try to wrap my head around it all. Now you may wonder why they suddenly need to burn $80B more than previously anticipated? It's a fair question. Certainly some of it is due to their success – which is to say that because ChatGPT continues to prove more popular, and thus more used, than they've been projecting, it keeps costing more to, you know, run it. There's nuance and wiggle room there – certainly some of the recent GPT-5 changes just rolled out seem related to trying to make their scaling more scalable. But one big issue has always been that OpenAI doesn't control their own cost fate, because they've run from the get-go on Microsoft's servers.

Now that is slowly shifting. With Microsoft not only allowing them to run on other clouds, but practically begging for it to happen – undoubtedly so they don't have to ramp their own CapEx to cover two of the largest companies in the world. And so the main benefactor of OpenAI has shifted from Microsoft to SoftBank. But, of course, SoftBank doesn't run their own massive cloud like Microsoft. Hence, the Stargate Project was born – out of the ashes of a project OpenAI had planned to do with Microsoft!

And therein likely lies the key for those costs. OpenAI, with SoftBank, is going to try to build their own at-scale cloud on the fly. They've noted there could be a way to allow others to use Stargate data centers in the future, thus offsetting some of their costs, but for now, they need all the capacity they can get. Which is why the first Stargate is really a complicated and convoluted series of partnerships spearheaded by Oracle in Abilene, Texas. And actually, that too was born out of other ashes – a failed deal with Elon Musk for xAI, no less.

Are we having fun yet? Beyond the undoubtedly crazy costs – much of which SoftBank is supposedly paying for – for power, water, and the like, they also need chips. A lot of chips. Chips from NVIDIA. And not only are those expensive, they're not even available at the scale at which OpenAI will need them just to run ChatGPT. And so there's a tangled weave of yet more partners being used to try to get access to enough chips.

And all of this will undoubtedly continue at the next Stargate site. And the next one. And so on. You're starting to see how costs can spiral out of control. OpenAI is trying to go from paying to use other clouds to building their own, but beyond the usual CapEx costs, they have to do so with data centers being built at a scale never done before. And while all of that is presumably to save costs in the long run – real shades of web companies trying to build their own clouds so as not to have to pay AWS back in the day – there's still going to be that one massive cost going forward: NVIDIA.

Even with the Stargates built out, OpenAI is presumably going to have to pay for the latest and greatest NVIDIA chips every year to stay on the cutting edge of AI. Unless, of course, they figure out a way to make their own chips too. Which is exactly what they're believed to be working on – and which has been all but confirmed by their would-be partner in such an undertaking: Broadcom.

OpenAI is hardly alone here. Everyone is looking at Google right now and what they've been able to do with their in-house TPUs used to train Gemini. To be clear, Google partners with and uses NVIDIA chips too, but clearly their TPUs have now come into their own for various aspects of AI. Which is undoubtedly why they're rumored to be selling the chips to other clouds now for external usage.

Amazon has been doing the same thing with its Trainium chips, just with less success to date. But if their partnership with Anthropic yields results, the entire industry might be happy, because it signals another option beyond NVIDIA.

Which, of course, no one would ever dare say, lest they piss off NVIDIA and potentially hamper access to those all-important cutting-edge chips. But as the onus shifts from training to inference, everyone is thinking that there can be more of a hybrid approach here, one not solely reliant on NVIDIA. Well, everyone but NVIDIA is thinking that, as they want to own inference as well.

Yes, yes, CUDA complicates all of this. And many are working on ways to try to be compatible with the software layer that is the standard. But the high level point remains that everyone is trying to diversify away from a total reliance on NVIDIA. And if you look at the OpenAI situation, which is the extreme version, you can clearly see why. I mean, they're going to burn over $100B over the next five years. And that's in no small part because they haven't controlled their own cloud and their own chips.

If they can execute on their crazy spend plan and get not just one, but many Stargates up and running at scale, it will all be worth it. But the issue right now is that Google is already building Gemini with their own cloud and their own chips at scale. That would seem to be a problem – certainly if any sort of market turn happens and it becomes a challenge to raise, say, $100B.1


1 I would say that Anthropic is in a similar boat except that they've seemingly been able to better align with Amazon (and to a lesser extent, Google). Microsoft should have been the one using Azure and building their own chips (which they're naturally doing too, of course) to bolster OpenAI here. With that context, it's crazy how the relationship has played out. Microsoft may very well end up owning 33% of a company they're massively competing with.