M.G. Siegler

The Endless Rebranding of AI

From AI to AGI to Superintelligence to... brands trying to make Granular Superintelligence happen

There was a time when good old "AI" was enough. While the term had largely lurked quietly in the background of computing since it was coined in the 1950s, it saw a bit of a boom in the 1980s. But by the time IBM's 'Deep Blue' system beat Garry Kasparov at chess in 1997, it started to explode into the lexicon. Soon after, we had a major motion picture directed by Steven Spielberg (at the behest of Stanley Kubrick, no less) that was not just about the concept, but was titled as such. Around the same time, in a remarkably prescient video from 2000, Larry Page laid out a succinct vision for why artificial intelligence would be the future of Google.

These days, it feels like the terminology is evolving just as fast as the technology.

One might argue that IBM itself started to degrade the value of the "AI" term with their endless 'Watson' hype after yet another real-world win, this time on Jeopardy! in 2011. But when it came time to actually leverage the technology to do useful things such as fixing healthcare... not so much. Beyond that, another problem was that the term quickly evolved to encompass too many things. So we started to go more granular with Machine Learning and Deep Learning. But that was really just setting the stage to come back around to the high-level concept of a computing system that could perform tasks just as a human would. And so "AGI" – "Artificial General Intelligence" – was born to convey specifically that.

Of course, with the rise of LLMs over the past few years, AGI itself has become a muddled mess of definitions. And this became a real issue because companies were not just being built around the premise, but legal contracts and partnerships were being formed around the definition of the technology. A definition which was, as it turns out, actually quite undefined.

When Microsoft was structuring their original deals with OpenAI, clearly the team, from Satya Nadella on down, thought that the world was so far away from "AGI" that they were okay not worrying too much about that term – "The Clause" – in those contracts. As Steven Levy recently wrote for Wired:

There are two conditions that must be satisfied for OpenAI to deny its technology to Microsoft. First, the OpenAI board would determine that its new models have achieved AGI, which is defined in OpenAI’s charter as “a highly autonomous system that outperforms humans at most economically valuable work.” Fuzzy enough for you? No wonder Microsoft is worried that OpenAI will make that determination prematurely. Its only way to object to the OpenAI board’s declaration would be to sue.

But that’s not all. The OpenAI board would also be required to determine whether the new models have achieved “sufficient AGI.” This is defined as a model capable of generating profits sufficient to reward Microsoft and OpenAI’s other investors, a figure upwards of $100 billion. OpenAI doesn’t have to actually make those profits, just provide evidence that its new models will generate that bounty. Unlike the first determination, Microsoft has to agree that OpenAI meets that standard, but can’t unreasonably dispute it. (Again, in case of a dispute a court may ultimately decide.)

Altman himself admitted to me in 2023 that the standards are vague. “It gives our board a lot of control to decide what happens when we get there,” he said.

And as those two sides have drifted apart, AGI and its definition are, unsurprisingly, at the center of the battle over the future of OpenAI. Oh yes, and Elon Musk is using the term in his own lawsuits against the company he co-founded.

All of this has undoubtedly contributed to 'AGI' drifting towards the newer 'Superintelligence' term. Latching on to the phrase (and title) of Nick Bostrom's book from a decade ago (though the term long predates it), many now argue that it's a different term, with the notion that while 'AGI' conveys human-level intelligence, 'Superintelligence' conveys, well, just that – intelligence beyond human capabilities. But plenty of others view the two terms as interchangeable. Here's an excerpt from an article published two days ago in The Wall Street Journal:

After all, the definition of AI superintelligence, or AGI—as the industry calls it—is at the heart of a high-stakes disagreement between Microsoft and OpenAI, and control over the technology being developed by Altman’s startup.

And really, on some level, without actual, concrete definitions for such things, it's all just marketing anyway.

As the 'AGI' issue became more of a point of clear tension in public between Sam Altman and Satya Nadella – with the former continually hinting that we're getting close to the 'AGI Moment' and the latter brushing aside such talk as "nonsensical benchmark hacking" – Altman switched to using 'superintelligence' more in his own talking points (though even more recently, he's gone back to using AGI quite often again, which is interesting...).

Around the same time, another OpenAI co-founder, Ilya Sutskever, left to form a new startup with the new term right in the company name: Safe Superintelligence. It's almost as if he knew that others were about to co-opt the term, and so he wanted to make clear what the intent was going to be: not just creating 'Superintelligence', but creating 'Superintelligence' in a safe way. Subtle.

Now Meta has entered the mix. Mark Zuckerberg's company seemingly didn't care too much about 'AGI' previously, but now that Llama has stumbled, he's rebooting its AI efforts to be all about, what else? 'Superintelligence'.

And he's spending, well, whatever it takes to get the 'Meta Superintelligence Labs' up to speed – including poaching Sutskever's co-founder and the former CEO of Safe Superintelligence, Daniel Gross. Wild times in 'Superintelligence' land.

At the same time, Zuckerberg doesn't want to make it seem like Meta is simply entering a race already well in progress towards the goal. So he's shifting the goalposts a bit with a new splinter term: 'Personal Superintelligence'.

He was hitting the new talking point early and often in a conversation with Jessica Lessin of The Information last week:

You know, I think the most exciting thing this year is that we’re starting to see early glimpses of self-improvement with the models, which means that developing superintelligence is now in sight, and we just want to make sure that we really strengthen the effort as much as possible to go for it. Our mission with the lab is to deliver personal superintelligence to everyone in the world, so that way we can put that power in every individual’s hand. And I’m really excited about it. It’s a different thing than what the other labs are doing.

At first, you might have thought that he was just using "personal" as a framing for his viewpoint on Meta's approach. But no, he's trying to brand it:

I think personal superintelligence is a different vision than what the others are going for. It is kind of like when the internet was first getting to scale, people were like, all right, is the internet going to be for productivity?

Is it going to be for entertainment, or is it going to change how we work at our companies? And the answer is, all.

I think the same is going to be true for AI, and there are going to be different companies that focus on different parts of it. So far, I think that, you know, you hear a lot from the other labs around wanting to automate all of the economically productive work in society.

And again, later on:

And I think the reality is that there’s a lot of great stuff to do there, and I’m excited about that. And I think when you put personal superintelligence in people’s hands, some of what they’re going to want to do is direct it at grand problems. But I actually think a lot of what people care about are just relatively simpler things in their lives.

And part of our ethos and values have always just been to try to put that power into people’s hands directly.

And that’s what we’re going to try to do with personal superintelligence. So it is a bit of a different focus, you know?

I'm not sure I do know, but whatever. Again, it's just marketing.

But wait, wait. Not to be outdone, we shift our attention back to Microsoft, where Mustafa Suleyman, a co-founder of DeepMind and now the head of Microsoft's consumer AI efforts, is on a seemingly never-ending publicity tour. And this go-around, he has a new talking point – mere days after Zuck's new terminology! And it's branding that only Microsoft could love: 'Humanist Superintelligence'.

In an interview with Tim Higgins for The Wall Street Journal:

Rather than worrying about creating some AI-god that we have to worry about squishing us all out of existence, he’d rather look to the technology as a tool to solve hard social problems—energy, education, medical care, food systems.

“That’s what I call humanist superintelligence,” Suleyman said. “And that’s probably why you find me more focused on those things rather than on sort of AGI or superintelligence for its own sake.”

Again, AGI and 'Superintelligence' are sort of interchangeable, and vaguely defined. But 'Humanist Superintelligence', that's Microsoft's thing.

Left unsaid by Suleyman: Microsoft actually cannot work on AGI, legally. That pesky OpenAI contract says as much, which sheds light on his talking points over the past year or so that Microsoft is happy to cede the "frontier" model work to OpenAI while they focus on building distilled, smaller, specialized models. They're happy to do that because they have to do that. But 'Superintelligence' – sorry, 'Humanist Superintelligence' – may give them a new opening to work at the frontier, even before OpenAI declares AGI. Which is totally different, you see.

One more thing: don't forget 'Apple Intelligence'! (How could you?) Also not AI. Not at all. Don't you dare compare them.

