An AI Call to Arms

Sam Altman wraps a plea in some grandiose (and vague) promises...
The Intelligence Age
In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.

Arguably the most interesting thing about this essay by Sam Altman is the 'why now?' of it. Is OpenAI on the verge of some new breakthrough? Is it a reflection on the 'o1' model launch? Is it to help close out the new funding round? Is it a post ahead of the announcement of that funding round? Trying to get ahead of the inevitable backlash from a startup raising $6B+? Did he just have some time on his hands after weeks/months of said fundraising and shipping o1? An attempt to "thought lead" by coining a term? Some combination of the above?

Most likely, it is timed with the upcoming UN General Assembly "general debate" set to open later today. And while it's wrapped in some provocative, if vague, proclamations, it really seems to be a call-to-arms for various world governments to step up with the resources needed to make "superintelligence" – the artist formerly known as "AGI" – a reality.1 Notably, resources beyond just money.

It also feels like Altman is using the opportunity to attempt to "own the narrative" of this cycle by coining a term/phrase: "The Intelligence Age". In recent years, Marc Andreessen has probably been the most effective at this,2 though Altman's mentor, Paul Graham, reminded everyone of his essayist capabilities by getting the world – at least the startup world – to talk about "Founder Mode" recently.3

A few passages to call out in Altman's post:

Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.

A nice framing that puts the current chaos swirling around AI into perspective.

This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.

This is what will garner all the attention because Altman is seemingly putting a date on "superintelligence". In reality, there's quite a bit of couching and vagueness here still. Beyond the use of "possible", "a few thousand days" is a pretty wide range. I went ahead and asked ChatGPT what the date will be in a few thousand days and it gave me a very specific one: December 11, 2032.

That's because it's taking "a few thousand days" to literally mean 3,000 days. Which is the technical bare minimum for "a few thousand days", I suppose. Though I also suppose you could bucket 4,000 or 5,000 or 6,000 or even more in there. Anything short of 10,000 days, really. That means anything before February 10, 2052 seems like fair game. Of course, the "it may take longer" couching basically negates any firm date. Anyway, it doesn't really matter as it's clearly just meant to put a rough timetable out there, not to draw a line in the sand. Sometime in the next century, Altman is "confident" we'll have "superintelligence" – however you wish to define that.
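For what it's worth, the date arithmetic is trivial to check yourself. Here's a minimal sketch in Python; the starting date of September 24, 2024 (the day the general debate opens) is my assumption, chosen because it reproduces the bookends above:

```python
from datetime import date, timedelta

# Assumed starting point: September 24, 2024 (the day the UN general debate opens).
today = date(2024, 9, 24)

# The loose bounds discussed above for "a few thousand days":
print(today + timedelta(days=3_000))   # 2032-12-11, the bare-minimum reading
print(today + timedelta(days=10_000))  # 2052-02-10, past which "a few thousand" surely ends
```

Which lines up with both the December 11, 2032 date ChatGPT gave and the February 10, 2052 outer bound.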

How did we get to the doorstep of the next leap in prosperity?

In three words: deep learning worked.

In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.

That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.

Again, "doorstep" would seem to be doing a lot of work here. But actually in the full arc of human history "a few thousand days" could be considered a "doorstep".

Regardless, this is also a nice, easy way to frame what is actually going on behind artificial intelligence as we know it right now, and as ushered in by OpenAI (thanks to the work of researchers within Google, who didn't leverage it internally because they really couldn't, for reasons that don't seem obvious at first but actually are). By throwing enough compute at algorithms tailored to discovering the links between words (and other data), we've done something profound.

Just how "profound" is also still up for much debate. Is this a magic trick or is this a path to recreating how our own brains work? Most everyone agrees this is not using the same mechanisms as the human brain right now, but can it lead us down that path? Or will it forge a new path? An intelligence that is not human intelligence, but a different kind, perhaps more powerful (one day) because it's not limited to the confines of a squishy neuron sponge in our heads? Or are we about to hit a wall with said algorithms and computing power? Will we run out of the energy required before we get there? Will the next breakthrough require augmenting this "intelligence" with other kinds, such as those gleaned from the "real world"? Can something truly learn without doing? How vital are physical senses? You quickly fall down philosophical rabbit holes...

Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.

If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

Beyond, again, the coining of a phrase, this would seem to be the real meat of the meal. It's buried, but it's there. A call-to-arms for governments to help with the infrastructure required. OpenAI can raise all the money in the world – nearly literally, it seems, from the largest companies in the world – but if governments aren't willing to help with said power production and other resources, the ultimate goal of giving such technology to everyone cannot be achieved. Obviously, this is a self-serving call ahead of said UN General Assembly meeting. But it beats the alternative: what the EU is doing. It also beats, I believe, the other calls in this general direction.

I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.

Just, you know, some modest goals and expectations. That said, I think it's fine to run the risk of over-promising and under-delivering here because the stakes are high enough (and again, the vague timelines for all this dampen any under-delivery risk).

I honestly don't love Altman's closing because it seemingly gets too much into the political weeds – but it's clearly meant to head off the major criticism sure to counter the call: that AI is going to take all of the jobs from the people these politicians serve. The theme of this year's UN General Assembly debate is: "Leaving no one behind: acting together for the advancement of peace, sustainable development and human dignity for present and future generations". And so here you go:

As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.

While I don't love it as a closing here, I also firmly believe in the above. We can debate whether AI is going to mean the end of the world in various science fiction scenarios. What it will not do is take all of our jobs. As Altman notes, it will undoubtedly be disruptive to some labor markets, but that's been true of every technological breakthrough dating back centuries or longer. Humans adapt and figure out how to use such tools to augment their lives and work, and ultimately make both better. There's no reason to believe this revolution will be any different. That doesn't mean it won't be bumpy and complicated and hard in various places. But it's also inevitable, and it would be dumb not to embrace and get ahead of such questions and issues now.

And I suspect we'll figure out how to leverage AI to help with various jobs even faster than we have with other technologies in the past.


Update: Altman wasn't the only one aiming to get ahead of the UN meeting...

The Fellowship of the AI
9 companions: the US teams up with the biggest American AI firms…

⛑️
Further thoughts on AI and jobs...
People at a Premium
AI will change Hollywood – for the better
Amazon Finds a Path for AI to Augment Creatives
Audible’s AI voices can scale audiobooks in a way that wasn’t happening…
John Wick Chapter AI
Lionsgate cuts a deal to work with AI rather than against it…
Google Isn’t Lazy, It’s Timid. As It Sadly Must Be.
Provocative soundbites aside, it’s obvious why Google isn’t OpenAI…
Strangulation or Regulation?
World’s first major act to regulate AI passed by European lawmakers…

1 "Superintelligence" seemingly has less baggage than "AGI", though still some new baggage thanks to a certain group that broke away from OpenAI!

2 Though nothing has topped 2011's "Software is Eating the World" – wild that this was already 13 years ago.

3 Which was derived, of course, from a talk Brian Chesky gave – another person close to Altman. Altman has clearly long modeled his own essays on those of Graham, and they stepped up when Altman took over YC from him. But now Altman has a reach far wider than Graham ever did.