M.G. Siegler

AI Agents Assemble!

OpenAI aims to usher out the Chatbot Era...
Sam Altman says helpful agents are poised to become AI’s killer function
OpenAI’s CEO says we won’t need new hardware or lots more training data to get there.

Chatbots? They're cute. Dumb and cute. It's time to get stuff done, people:

“What you really want,” he told MIT Technology Review, “is just this thing that is off helping you.” Altman, who was visiting Cambridge for a series of events hosted by Harvard and the venture capital firm Xfund, described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.

There's been a steady drumbeat of 'Agent' talk for a few months now in the AI world – well, in some cases, years. And it feels like with some of the shine starting to wear off ChatGPT, now is the time. But don't take my word for it, take his:

"ChatGPT is not phenomenal. ChatGPT is mildly embarrassing, at best. GPT-4 is the dumbest model any of you will ever have to you again – by a lot."

To quote someone else, "If you don't like what is being said, change the conversation." In this case, quite literally.

Back to Altman, on AI hardware:

But Altman says there’s a chance we won’t necessarily need a device at all. “I don’t think it will require a new piece of hardware,” he told me, adding that the type of app envisioned could exist in the cloud. But he quickly added that even if this AI paradigm shift won’t require consumers to buy new hardware, “I think you’ll be happy to have [a new device].”

Nice to hear such honesty from... the largest backer of Humane. But really, both things can be true. Smartphones can be the main hubs for AI for the foreseeable future while there can also be new and interesting pieces of hardware that focus on AI. They just have to be simple, I think. And actually work.

Also, might we see someone on stage soon talking about AI on a certain smartphone?

On the topic of running out of data to feed the machines, Altman gives his most interesting, if entirely vague, answer...

“I believe, but I’m not certain, that we’re going to figure out a way out of this thing of you always just need more and more training data,” he says. “Humans are existence proof that there is some other way to [train intelligence]. And I hope we find it.”

The current "hack" around this has been to feed the models with data created by... other models. Which is not a new concept, but seems problematic at scale for obvious reasons. So where might we go from here...

As for AGI:

Altman suspects there will be “several different versions [of AGI] that are better and worse at different things,” he says. “You’ll have to be over some compute threshold, I would guess. But even then I wouldn’t say I’m certain.”

Still feels like AGI will never be clearly defined – and hopefully not in a courtroom. And so some people will always believe it's here while others will always believe it's not.