M.G. Siegler

'Strawberry' Fields of Research

OpenAI picks a decidedly better codename for Q*
Exclusive: OpenAI working on new reasoning technology under code name ‘Strawberry’
The Strawberry project was formerly known as Q*...

There's both seemingly a lot and frustratingly little in this report from Reuters:

ChatGPT maker OpenAI is working on a novel approach to its artificial intelligence models in a project code-named “Strawberry,” according to a person familiar with the matter and internal documentation reviewed by Reuters.

The project, details of which have not been previously reported, comes as the Microsoft-backed startup races to show that the types of models it offers are capable of delivering advanced reasoning capabilities.

Teams inside OpenAI are working on Strawberry, according to a copy of a recent internal OpenAI document seen by Reuters in May. Reuters could not ascertain the precise date of the document, which details a plan for how OpenAI intends to use Strawberry to perform research. The source described the plan to Reuters as a work in progress. The news agency could not establish how close Strawberry is to being publicly available.

How Strawberry works is a tightly kept secret even within OpenAI, the person said.

I'm confused: if Reuters saw this internal document in May, why are they just reporting on it now? Presumably they were doing additional reporting to back up what they saw? But given how much seems to leak out of OpenAI, that's a surprisingly long timeframe. Also, all of this remains rather vague and reads more like half-stories relayed via games of telephone.

The document describes a project that uses Strawberry models with the aim of enabling the company’s AI to not just generate answers to queries but to plan ahead enough to navigate the internet autonomously and reliably to perform what OpenAI terms “deep research,” according to the source.

This is something that has eluded AI models to date, according to interviews with more than a dozen AI researchers.

Yes, this is "agentic" behavior, which everyone is already well aware OpenAI (like everyone else) is working on. No less than Sam Altman talked about this, on the record, in May. Back to this latest report:

The Strawberry project was formerly known as Q*, which Reuters reported last year was already seen inside the company as a breakthrough.

Two sources described viewing earlier this year what OpenAI staffers told them were Q* demos, capable of answering tricky science and math questions out of reach of today’s commercially-available models.

You may recall that Q* – perhaps the most comically awful name to pick, especially if you want to keep a project under the radar – was the project that got a lot of heat last year, as reports had some people inside OpenAI worried it was some sort of major AGI breakthrough. In fact, it was what was initially blamed for Altman's ouster (subsequent reports placed the blame on a whole range of issues). Conspiracy theories sprang up. Wild, I know.

Back to reason, and the problem of AI reasoning:

Strawberry is a key component of OpenAI’s plan to overcome those challenges, the source familiar with the matter said. The document seen by Reuters described what Strawberry aims to enable, but not how.

In recent months, the company has privately been signaling to developers and other outside parties that it is on the cusp of releasing technology with significantly more advanced reasoning capabilities, according to four people who have heard the company’s pitches. They declined to be identified because they are not authorized to speak about private matters.

All this, plus the recent revelation of OpenAI's 5-level AI scale to determine where the technology stands in human-level problem solving, seems to point to the company closing in on the next big unveil. Obviously, everyone is waiting on GPT-5 – but we're also still waiting on the release of the vocal computing breakthrough in GPT-4o. The Scarlett Johansson fiasco may have delayed things, but presumably not this long?

Strawberry has similarities to a method developed at Stanford in 2022 called “Self-Taught Reasoner” or “STaR”, one of the sources with knowledge of the matter said. STaR enables AI models to “bootstrap” themselves into higher intelligence levels via iteratively creating their own training data, and in theory could be used to get language models to transcend human-level intelligence, one of its creators, Stanford professor Noah Goodman, told Reuters.

“I think that is both exciting and terrifying…if things keep going in that direction we have some serious things to think about as humans,” Goodman said. Goodman is not affiliated with OpenAI and is not familiar with Strawberry.
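As it happens, STaR itself isn't a secret: it's a published Stanford paper (Zelikman et al., 2022), and its core loop is simple enough to sketch. Below is a rough Python sketch of that loop as I read the paper – not of Strawberry itself, which, again, is "a tightly kept secret" – and the model and fine_tune helpers here are hypothetical placeholders, not any real API.

```python
# Minimal sketch of a STaR-style ("Self-Taught Reasoner") bootstrapping loop,
# per Zelikman et al. (2022). The model interface is a hypothetical
# placeholder, not OpenAI's (or anyone's) actual API; it only illustrates
# the "iteratively creating their own training data" idea from the quote.

from typing import Callable, List, Tuple

# A "model" here is anything that maps a prompt string to a generated string.
Model = Callable[[str], str]

def star_iteration(
    model: Model,
    fine_tune: Callable[[Model, List[Tuple[str, str]]], Model],
    problems: List[Tuple[str, str]],  # (question, known correct answer)
) -> Model:
    """One round of STaR: generate rationales, keep those that reach the
    right answer, 'rationalize' the failures with an answer hint, then
    fine-tune on the collected (question, rationale) pairs."""
    training_data: List[Tuple[str, str]] = []

    for question, answer in problems:
        # 1. Ask the model to reason its way to an answer.
        rationale = model(f"Q: {question}\nThink step by step, then answer.")
        if answer in rationale:
            # 2. Keep rationales that arrive at the correct answer
            #    (a crude string check stands in for real answer grading).
            training_data.append((question, rationale))
        else:
            # 3. "Rationalization": give the correct answer as a hint and ask
            #    the model to produce a rationale that justifies it.
            hinted = model(
                f"Q: {question}\nThe correct answer is {answer}. "
                "Explain step by step why."
            )
            if answer in hinted:
                training_data.append((question, hinted))

    # 4. Fine-tune on the self-generated rationales and return the new model.
    return fine_tune(model, training_data)

def star(model: Model, fine_tune, problems, rounds: int = 3) -> Model:
    """Repeat the loop; each round's model generates the next round's data."""
    for _ in range(rounds):
        model = star_iteration(model, fine_tune, problems)
    return model
```

The "bootstrap" Goodman describes is that feedback loop: each fine-tuned model produces better rationales, which in turn become the next round's training data.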

Well, at least "Strawberry" is a much juicier, more delicious codename. Goodman's quote mixed with 'Q*' would have the internet breaking out in hives again.

Strawberry Fields Forever...