OpenAI Figures Out Their Path Back to "Open"

Ahead of what is clearly the GPT-5 launch tomorrow, when all the oxygen will be sucked out of the AI newsroom, I thought it was worth pointing out a few things about OpenAI's "open source" launch this week.
First and foremost, OpenAI is "open" again! Obviously, that's not entirely true. But within the AI world, "open source" has come to mean "open weight", thanks in no small part to Mark Zuckerberg's endless rhetoric touting the former while meaning the latter for Meta's Llama models. That is, Meta was happy to hand anyone the finished sausage, but not the recipe or the ingredients that went into it. And again, this is now pretty standard industry practice.
It wasn't always this way. And, in fact, OpenAI was seemingly set up to be the full and actual "open" alternative to what Google was about to do after it purchased DeepMind back in 2014 (which Facebook/Meta also tried to acquire, by the way). But the reality of the world quickly set in. Not only did OpenAI find it was going to be hard to actually compete as a true non-profit – hence, Microsoft entering the picture – there also arose very real questions about open sourcing technology with such profound implications for society, including some very negative potential side effects if anyone had full access to it.
And so true "open" went out the window, and partial "open" took its place. The last time OpenAI released any such model was six years ago, with GPT-2. But now they're back, with the industry having seemingly coalesced around "open weight" models that are distilled from and/or offshoots of non-open flagship models.
Meta was trying to run a different playbook by open-weighting their actual flagship model, but it didn't work as planned, and now they're spending billions of dollars to try to get back into the frontier model race towards AGI/Superintelligence. They're still saying they're committed to "open source" because they're undoubtedly going to run the same playbook as OpenAI and others here: locking down their flagship models while open-weighting subsets of them.
To be clear, this is still important and potentially dangerous work. As Casey Newton writes in Platformer:
Conversations about AI safety often focus on what happens when models become more powerful. Perhaps equally important, though, is what happens when they become smaller. Until recently, AI models that could be used in service of cyberattacks or other mischief were only accessible via large platforms that could detect misuse and cut off the offenders. Over the past several months, though, frontier labs have released powerful models that can be run on a commercial laptop, without any oversight from the lab at all.
It's not just that these "open source" models are "open", it's that they're smaller variants which can be run on machines that don't need to be superclusters in the cloud. You can run the smaller variant of OpenAI's model on your MacBook. That's both awesome and potentially terrifying.
At the same time, they're limiting these even further by making them text-only. But text is probably powerful enough at this point to cause real problems if these models are state-of-the-art – which they're close to, if not quite there.
This is also all one hell of a counter-punch, at least narrative-wise, by OpenAI against Meta. As we all know, Meta is trying to poach seemingly anyone and everyone from its rival, and it's somewhat working and somewhat not. Sam Altman can counter with money, but more powerful might be the notion of making a splash in "open source" at the same time that Meta is likely moving away from it.
And in a related vein, this move puts OpenAI in a much more friendly position with those who care about such "open" elements of AI. When you're the overall market leader, as OpenAI clearly is right now, it's a small but important bit of optics.
I find all of this particularly fascinating given that Google just got hit hard over its own "open" promises around Android back in the day. It's a different market and a different world, but it showcases that for all the obvious love "open" gets, it's not so cut and dried – might I suggest Dario Amodei's thoughts on "open source" on the Big Technology podcast a couple of weeks ago – and it likely won't be for OpenAI – again – either.
