Meta's "Open"-ish New AI Models
10 months after their nearly $15B deal to buy Scale AI and reboot their AI efforts, it seems like we're about to see the first true fruits of labor out of the Meta Superintelligence Lab. The first models are supposedly a bit past deadline and undoubtedly over budget, but wait, is there a wrinkle?
Meta is preparing to release the first new AI models developed under Alexandr Wang, with plans to eventually offer versions of those models via an open source license, Axios has learned.
Why it matters: Meta has been the largest U.S. player to let others modify its frontier models, and there has been growing speculation the company might retreat from that strategy altogether.
Before openly releasing versions of the new models, Meta wants to keep some pieces proprietary and to ensure they don't add new levels of safety risk, according to sources.
I don't know, that reads like "eventually" is doing a lot of work there. And if that's the case, that's not surprising at all. Meta will release some "open" version of their new models... eventually. This is the exact same strategy that Google, OpenAI, and pretty much everyone else follows. They ship their frontier models and products, then sometime down the road, they release smaller, less capable open source variants. OpenAI was late to this particular party, but finally ran this playbook last year. Google just released their latest version of "Gemma", their "open" version of Gemini.
They all do this to keep their key work under lock and key, using the blanket of safety as the (undoubtedly at least partially legitimate) rationale. Meta will be no different here.
I guess the angle is that Meta is still going to do "open" at all after the "Llama" debacle. But it seemed like they always were – that's how their teams could say with a somewhat straight face that they weren't abandoning "open", that it would always be a part of the playbook. As I wrote last July:
The open source strengths that Zuckerberg touted, turned out to perhaps be weaknesses when it came to competing at the highest end of the market. It doesn't mean open source (again, open weight) is bad – it just means Meta's strategy here may have been flawed. And I suspect where they'll net out is the same strategy that Anthropic, Google, and soon OpenAI are doing. That is, keep the cutting-edge models closed and open source older and smaller models more selectively.
Bingo. Back to Fried:
The move fits with Wang's view that Meta can be a force for democratizing access to the latest AI technology and ensuring that there is a U.S.-made option that is open for developers.
Wang sees Anthropic and OpenAI as increasingly focused on delivering their models to governments and the enterprise. By contrast, Meta's effort is focused on consumers, per sources. Meta wants its models distributed as widely as possible around the world.
Again, there are already US-made options for developers: the smaller, less performant "open" models. That's different from Meta's previous strategy with the Llama models, which were "open" from the get-go. That strategy, of course, did not work. Hence Meta needing to spend $15B on Scale and billions more on other, fresh AI talent.
By the way, the US angle here is clearly meant to counter the rise of "open" models coming out of China. But even there, the large companies seem to be moving into a more proprietary posture. Why? To try to make the technology work as an actual business and to win the AI day. As it turns out, there are downsides to "open" – Meta learned this the hard way when some of those Chinese models seemingly used Llama as a base model from which to distill their own. Of course, some of those model makers may have been doing this with OpenAI models as well – and not the "open" ones. We'll see what the next version of DeepSeek looks like, which also seems pretty delayed at this point.
The more interesting element may be the notion of focusing on consumers. This, of course, has been OpenAI's strength. But while they're not abandoning it, they're clearly taking their foot off the consumer gas to slam on the enterprise pedal, in order to counter Anthropic. So perhaps Meta does have a bit of an opening here...
The leaders aren't standing still. Both OpenAI and Anthropic are hinting that their next models, also expected to drop soon, represent significant advances.
Meta knows its new models may not be competitive across the board with the coming ones from those labs, but believes it will have areas of strength that appeal to consumers, the sources said.
Billions and billions spent to get a model out the door that doesn't compete at the frontier? Of course, Microsoft is in the same boat, promising that they simply got a late start to the frontier game (thanks for nothing, OpenAI) but that they're closing in. But again, it's not like OpenAI and Anthropic – let alone Google – are just going to hit pause and let everyone else catch up.
And while xAI previously seemed to prove that catching up was doable with enough money burned and corners cut, it hasn't really mattered. Will it for Meta?
Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram — free services with global scale that competitors can't easily match.
But that was the same strategy with Llama. Again, it didn't work. So will these new proprietary models change the equation? I doubt it. But we'll see!
One more thing: might Meta keep the 'Llama' naming scheme around for the "open" variants of their models? Unclear. They may just want to move away from the tainted brand, but it was always a cute/clever name. The new fruit codenames – Avocado, Mango, etc. – just seem to be copying OpenAI, though I still wish they would have actually shipped those names!