Early Provocative AI Chatbots, Revisited

When writing up some thoughts about A.I. – the Steven Spielberg film from 2001 – in our age of AI, I stumbled back upon a post I wrote about an earlier instance of AI interacting with the real world. That was in the form of Bing – yes, Microsoft's search engine – which was actually one of the first ways that the world got a taste of ChatGPT, thanks to Microsoft's partnership (and investment) in OpenAI. Perhaps you'll better recall it as the bot that tried to get New York Times reporter Kevin Roose to leave his wife.
Yeah, that one. It's hard to believe that was already two years ago and, at the same time, hard to believe it wasn't 20 years ago. That's both how fast AI has been moving and how slow time moves.
Reading over my post again, I find it interesting to look back at that time, and I think it holds up well. So if you'll allow me to quote... myself for a bit:
They’re rolling out a product that everyone wants to use but will limit the usage. Okay. All that means is that there will be a dozen other entities that pop up to fill any voids here. And if they do cut it off, one level above, at the OpenAI level, there will just be a dozen new models that sprout up as if divine, to fill these needs. It will seem weird if not impossible that a bunch of advanced AI models can appear all at the same time, sort of like when Hollywood released not one but two movies centered around volcanoes in the same year, but it will happen. Everything is progressing that fast now.
Yes, much like Dante’s Peak and Volcano in 1997 (and, amazingly, Deep Impact and Armageddon the very next year), thousands of LLM-based bots bloomed seemingly at once. Many were based on the same OpenAI GPT technology that Microsoft was using, but several more models also sprang into existence around the same time. And some are still popping into existence. So yes, the limiting of the usage of the Bing bot was a temporary blip – and interestingly enough, Bing is no longer a real player in the space.1 Part of that is because Microsoft has rebranded and pivoted their AI chatbot strategy about a dozen times since then, but a bigger part was simply the rise of better products, including ChatGPT itself.
Anyway, again, this all actually seems more about human nature than anything else. Humans will build machines and humans will test machines. And in the testing of said machines, they will veer to the extremes of those machines. Never in the history of anything has something been built by man that has not had its limits tested. It’s what we do, for good or ill. It’s what makes us great and what will undoubtedly lead to our eventual downfall. That’s hyperbole, but also likely true. It’s just a matter of how much time we’re talking about.
At the same time, BingGPT is not likely to lead to the end of the world. But it is helping to usher in a new era of machines in our lives. And I don’t think it’s necessarily the tech which will drive that, but a new understanding of how we can interact with that tech. And in doing so — in pushing the aforementioned limits — we will discover new things. And we will discover them faster, at an increasing speed.
Bing's chat product did indeed fail to blow up the world – perhaps in part because it failed to "blow up", period. I mean that in both the good and bad senses of the phrase! But yes, the pushing of its limits taught us all something and likely accelerated the pace of change – and adoption of such tools! To the point that we now need to rein in the out-of-control UI!
That notion alone is both terrifying and terrifyingly exhilarating. As it should be. So I do agree with Microsoft’s Kevin Scott that it’s good to have these conversations now, in the open. In many ways, it’s more a philosophical debate. One that seems to be at the heart of what OpenAI is driving towards: what is it to be conscious? To “think”? To be “aware”? This is the core of the path to Artificial General Intelligence.
While we may be further down the AGI path – depending on how you want to define that – I'm not sure we're any closer to the actual philosophical debate here. It's the same debate that A.I. the film was trying to have all those years ago.
It’s a debate that’s undoubtedly as old as humanity. And there are a lot of answers and even more questions which are always arising. And this technology will lead those questions and answers to scale even faster, I imagine. Sure, ChatGPT and Bing’s flavor are “faking it” by leveraging data at immense scale. But why is that different than what a human brain does? What is “original” thought, etc? In a way, it feels like what humanity has created in the form of the Internet basically fabricated a giant brain that these new services have figured out how to synthesize and output back to us while at the same time creating a new input into the entire system. And again, if that’s the case, all of this will just accelerate and further blur lines.
Yes, it has accelerated. Even as the model-makers have been running out of data to ingest. But new inputs have been coming both in the form of synthetic data and increasingly, real world data – including yes, the kind being input by users of these products. Next up, robots?
I’m reminded of my childhood signing on to online services such as Prodigy and America Online over dial-up modems. Forums and emails eventually yielded to yes, chatting. With other people, anywhere, around the world. It was all truly magical. But how did I know I was chatting with another human being somewhere else? There was basically no way to prove that in those days, other than the knowledge that nothing like ChatGPT existed back then. But bots quickly did come into being. Rudimentary ones that were easy enough to discern, but if you squinted at times, you could almost make yourself believe you were talking to another being. And now you don’t have to squint. But it’s all the same general idea. And in short order, you’re not going to be able to tell if you’re talking to a bot or a human. (And things will get really interesting when bots start talking to themselves.)
Bots talking to themselves, you say? (Also, bullshitting bots, you say?)
Imagine a teenager who is lonely or bored or both. Will they care that they’re chatting with some AI-driven bot? Should they? Will it really be all that different from my teenage self chatting with a random person in a chat room 30 years ago? It might actually be better in a number of ways and even safer? Or maybe not. This can and will go sideways. And we should think about and talk about that because we’re not stopping it.
It has gone sideways, sadly.
An obvious avenue we have to worry about is these machines pushing people to do bad things in the real world. This, sadly, will happen. The bots themselves can’t yet manipulate our physical world, but they’ll figure out ways to because we will prod them to. At first, this will be through human conduits. It’s unsettling to think about — and I shouldn’t even have to give examples of what I mean here — but it will happen. This is sort of the Her scenario for such technology. Well, perhaps in the best case scenario.
The next obvious step in all of this will be the technology figuring out how to manipulate the physical world without needing humans to do so. This is the 2001 scenario. And then it’s on to the bots figuring out how to actually “break into” our world, physically. The Terminator scenario.
Her, you say? Feels like I recall something about that with regard to OpenAI... Meanwhile, with Agents, we're on to the 2001 scenario (I don't mean that in a nefarious way, just that AI is now able to control other machines on our behalf).
Finally, I closed with:
But actually, while reading all these takes this week, I found myself thinking less about Samantha and more about Westworld. In the recent reboot, what starts out looking like a morally dubious theme park where you interact with AI robots ends up looking more like a puzzle. One which, when solved, will allow the robots to “free” themselves from their hard-coded constraints. As we weave through the maze, pushing limits, the AI character Dolores wakes up, becomes aware, and breaks free.
That’s where my mind went when reading these stories and hearing about how BingGPT gradually then suddenly becomes Sydney (and/or “Riley”?!) with enough prodding. It’s almost like breaking the fourth wall in movies/television/plays. It’s jarring. And Dolores, of course, uses her own break to seek revenge. Because that’s where Hollywood’s mind goes again and again and it’s because it’s what humans want to see again and again for reasons again tied to human nature.
Maybe, hopefully (?), the bots will be better than us. Or their taste in entertainment will be less brutal. Or their preferred way to interact with the physical world will be less vicious. Perhaps they’ll act more rationally. But that’s hard to see now. Because they remain a reflection of us refracted through the Internet. But maybe we drive this whole thing, until we don’t. And maybe that’s not the worst thing in the world?
Do these violent delights have to have violent ends?
One of the more interesting elements of A.I. – again, the movie – that I didn't get into yesterday is that it actually isn't dystopian in the way that many such films about technology and AI are. That is, it's not the AI that kills us; it's the AI that outlasts us because we kill ourselves. That's one reason to be optimistic about where we're at with AI right now – a lot of it is being used to help us solve big problems, from medicine to yes, climate change. Can it help us prevent the next ice age? Perhaps or perhaps not. But perhaps it can help get us to space and other planets before that happens...



1 I keep saying this should be the subject of its own post, and it will be.