The AI Wall

Hopefully, and probably, more of a temporary plateau than a wall...
OpenAI, Google and Anthropic Are Struggling to Build More Advanced AI
Three of the leading artificial intelligence companies are seeing diminishing returns from their costly efforts to develop newer models.

The whispers that had been building for months behind the scenes are now seemingly spilling into the open via sources within the various key players in AI: the rapid pace of breakthroughs, at least when it comes to LLMs, is slowing. Fast. The Information first hit upon this a few days ago with a report detailing OpenAI's "strategy shift" away from GPTs as the underlying tech plateaus. But the more interesting element is that all of OpenAI's competitors seem to be hitting the same wall, all at once.

As Rachel Metz, Shirin Ghaffary, Dina Bass, and Julia Love report:

OpenAI isn’t alone in hitting stumbling blocks recently. After years of pushing out increasingly sophisticated AI products at a breakneck pace, three of the leading AI companies are now seeing diminishing returns from their costly efforts to build newer models. At Alphabet Inc.’s Google, an upcoming iteration of its Gemini software is not living up to internal expectations, according to three people with knowledge of the matter. Anthropic, meanwhile, has seen the timetable slip for the release of its long-awaited Claude model called 3.5 Opus.

Everyone has known for months that the front-end shift from chatbots to agents would be happening around now, but what is perhaps surprising is that the underlying technology is no longer full-steam-ahead on all fronts. As they note about OpenAI:

OpenAI was on the cusp of a milestone. The startup finished an initial round of training in September for a massive new artificial intelligence model that it hoped would significantly surpass prior versions of the technology behind ChatGPT and move closer to its goal of powerful AI that outperforms humans.

But the model, known internally as Orion, did not hit the company’s desired performance, according to two people familiar with the matter, who spoke on condition of anonymity to discuss company matters. As of late summer, for example, Orion fell short when trying to answer coding questions that it hadn’t been trained on, the people said. Overall, Orion is so far not considered to be as big a step up from OpenAI’s existing models as GPT-4 was from GPT-3.5, the system that originally powered the company’s flagship chatbot, the people said.

And so instead, we're entering the 'agentic' world with stabilizing LLMs augmented by newer "reasoning" models. Which may actually be a good thing. Everything has been moving at such speed that product development descended into pure chaos. And end users were seemingly starting to sense this. A breather from the breakneck pace is probably healthy. The question is when and how the next breakthroughs happen. Synthetic data, it seems, has only gotten us so far.
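To make the "reasoning" idea concrete: the simplest version of the trade is spending more inference-time compute on a fixed base model rather than training a bigger one. Here's a minimal sketch of one such pattern, self-consistency voting, with a toy stand-in for the model. To be clear, this is my illustration of the general technique, not how any lab's actual reasoning models work:

```python
import random
from collections import Counter

def toy_model(prompt: str, rng: random.Random) -> str:
    """Stand-in for a sampled LLM call (a real one would emit a full
    chain of thought at temperature > 0 before a final answer)."""
    # Simulate a model that gets the right answer ~70% of the time.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 99))

def self_consistency(prompt: str, n_samples: int = 16, seed: int = 0) -> str:
    """Sample several independent answers and majority-vote the result.
    More samples = more inference-time compute = better reliability,
    all without touching the underlying model's weights."""
    rng = random.Random(seed)
    answers = [toy_model(prompt, rng) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistency("What is 6 * 7?"))  # almost certainly "42"
```

As for synthetic data's limits, back to the Bloomberg report: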

These efforts are slower going and costlier than simply scraping the web. Tech companies are also turning to synthetic data, such as computer-generated images or text meant to mimic content created by real people. But here, too, there are limits.

“It is less about quantity and more about quality and diversity of data,” said Lila Tretikov, head of AI strategy at New Enterprise Associates and former deputy chief technology officer at Microsoft. “We can generate quantity synthetically, yet we struggle to get unique, high-quality datasets without human guidance, especially when it comes to language.”
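That quality-and-diversity point is easy to demonstrate. Here's a rough sketch, using the standard shingle-hashing trick for near-duplicate detection (the toy "generator" and the thresholds are my invention), showing how a synthetic corpus can be huge in quantity while containing very little that's actually new:

```python
import hashlib

def shingles(text: str, n: int = 5) -> set[str]:
    """Hash a text's overlapping n-word 'shingles' for cheap near-dup detection."""
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))]
    return {hashlib.md5(g.encode()).hexdigest() for g in grams}

def novel_only(samples: list[str], overlap_threshold: float = 0.5) -> list[str]:
    """Keep a sample only if most of its shingles haven't been seen before."""
    seen: set[str] = set()
    kept: list[str] = []
    for s in samples:
        sh = shingles(s)
        if len(sh & seen) / len(sh) < overlap_threshold:
            kept.append(s)
            seen |= sh
    return kept

# Toy "generator": a model paraphrasing the same few facts repeats itself fast.
templates = ["the capital of {c} is {x}", "{x} is the capital city of {c}"]
facts = [("france", "paris"), ("japan", "tokyo"), ("peru", "lima")]
corpus = [t.format(c=c, x=x) for _ in range(50) for t in templates for (c, x) in facts]
print(f"{len(corpus)} samples generated, {len(novel_only(corpus))} actually novel")
```

Three hundred samples in, six out. Quantity is trivial; coverage of things the model doesn't already say is the hard part.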

As mentioned, "reasoning" will help, as will the feedback loops with the agents themselves. But it may take more multimodal data from the real world, brought in by devices out and about, to break through such walls. Or maybe something else entirely. With 100,000+ GPU "supercomputer" datacenters seemingly being built left and right, questions around costs, and just how much scale and power (quite literally) is actually useful, are going to come up quickly, one imagines.
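On the power point, the back-of-envelope math is striking. Using the public ~700W TDP of an NVIDIA H100 SXM and a rough multiplier for everything around the GPUs (my assumptions, not figures from the piece):

```python
gpus = 100_000               # the "supercomputer" scale being built out
watts_per_gpu = 700          # NVIDIA H100 SXM TDP (public spec)
overhead = 1.5               # rough: CPUs, networking, cooling, power losses
total_mw = gpus * watts_per_gpu * overhead / 1e6
print(f"~{total_mw:.0f} MW continuous")  # ~105 MW, a small power plant's worth
```

Run that around the clock for a year and, on these same rough assumptions, you're north of 900 GWh. Which is why the power question is literal.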

Does all of this lead to a world where an Apple, taking it nice and slow, ends up with the last laugh – having not spent tens of billions of dollars on an AI build-out when it can just leverage the work everyone else has done and focus on the productization of the technology? Or is this pause in progress just a blip, with Apple left even further behind as the technology gets increasingly siloed and hoarded away by all the players trying to justify said costs? Including, perhaps, "open source" Meta. Ideals have a funny way of flying out the window when Wall Street is knocking on it.

Speaking of, what might this blip mean for NVIDIA?

For now, let's just hope the current plateau features technology good enough to make the agents that are seemingly about to be all around us actually useful.