AI Am Become Death
In a way, it feels like the endless talk about AI over the past few years has been leading up to this moment. The discussions have always ranged from 'is AI just a silly toy?' to 'is AI the end of humanity?' but such talk has flowed from intellectual backroom gatherings to internet chat rooms, to dinner parties, to comment sections, to company town halls, to social media, and back again. Now here we are with the United States and several other countries actively engaged in armed confrontation – many would call this "war," though no one has declared it – and the AI debate swirls around it.
The timing is odd. We now know with certainty that last week the US was preparing to preemptively strike Iran. At the same time, the key cog in that machine, the Department of War (the artist formerly known as the Department of Defense), was also actively engaged in discussions with Anthropic – yes, the AI startup – over the usage of their models for purposes related to national security. I mean maybe, just maybe, punt the conversation until a better time?
But perhaps it's related, because the US knew these strikes were coming and that some level of Anthropic's technology would be used in them. And because there had been talk that Anthropic was questioning the use of their models in the raid to oust Nicolás Maduro from power in Venezuela a couple months prior, maybe the DoW wanted to get any usage squared away ahead of this new operation. Or perhaps Anthropic was pressing and the Pentagon, knowing what was in motion, got fed up. Or perhaps they viewed it as a potential point of leverage in such conversations. Who knows. Subsequent reporting will undoubtedly make all the timelines clearer once the smoke, quite literally, clears. But for now, this all just seems wild. As the US was ramping up to battle Iran, they were also battling one of their own technology companies.
Talking through the situation with Alex Kantrowitz on his Big Technology Podcast yesterday shook loose a few thoughts that I wanted to jot down. First and foremost, at the highest level, this all really might be as simple as the fact that the current administration and the leadership team at Anthropic, led by co-founder Dario Amodei, just really don't like each other. This disdain isn't a secret; there are plenty of public comments on the matter – in particular from administration officials noting their problems with Amodei's overall ideologies.
And in that light, it's even more strange that all of this is happening! Why on Earth is the DoW using Anthropic's models if they're so uncertain about the people building them – in control of them? The answer there may lie in their use of other technology from Palantir and Amazon, which integrates Claude. Or it may simply be that the technology is that good. Or, perhaps, the government has just been looking for the right excuse to shove them out the door. The timing wasn't great, but this squabbling over legal terms during a build-up to war was the final straw.
Let's put aside Occam's razor here for a moment, because there's also been an escalation of the argument to a far higher level: that this really is about private company rights, government control, democracy itself, and, of course, nuclear war.
Again, I suspect this is more likely a fairly straightforward culture clash and I think the fact that the Pentagon was so quick to sign a deal with OpenAI showcases that further. Again, in the middle of war, they're hashing out new contracts with AI players. Perhaps because all they really needed was someone they deemed more palatable to sign their existing contract. In walks Sam Altman...
The fact that OpenAI felt the need to quickly amend those documents – undoubtedly after rather immense and immediate public backlash – also points to this idea. Altman said he was simply trying to de-escalate the whole situation by, um, stepping in and taking over the contract from his biggest rival, but he also admits now how bad that looked. "Sloppy," as he puts it.
Though, naturally, they'll be keeping the contract...
Anyway, everyone – including Altman and Amodei – clearly wants to make this into something more than a contract dispute and an inevitable conscious uncoupling between two parties that simply don't like each other. And there are certainly interesting debates to be had there. But I also think it doesn't serve anyone's real interest to blow this completely out of proportion.
But that keeps happening with AI because it's AI. Depending on the situation and your own vantage point and bias, it's either the answer to or the problem with everything. AI will solve productivity. AI will displace jobs. AI will cure disease. AI will lead to more suicide. AI will free us. AI will enslave us. It literally slots in everywhere in both directions depending on the argument to be made.
It's the Rorschach, um, tech.
Reading over the reactions to this latest brouhaha, it seems to me that it may be time to lay to rest one analogy that's very much top of mind and at the center of this again right now: that AI is the new nuclear weapons.
From Altman talking about the Manhattan Project. To magazines comparing him to Oppenheimer. To Amodei explicitly likening selling NVIDIA chips to China to selling nukes to North Korea.1 The entire analogy has escalated too far. And it's clearly a big part of what is fueling this most recent debate.
But the comparison breaks down immediately at the most fundamental level. A nuclear weapon is just that, a weapon. It has one purpose – well, maybe two if you consider deterrence a purpose – and that is to destroy. Sure, we could argue that nuclear technology has other purposes, notably power, but come on, that's not the argument or comparison anyone is actually making here – aside from, historically, Iran! This is saying that AI is the biggest threat the world has faced since the advent of the atom bomb during World War II.
The difference, of course, is that AI has positive uses as well as negative ones. What is the positive usage of an atom bomb? Even if you want to say deterrence, that's decidedly about not using it. Ending the war with Japan? Sure, but that wasn't the initial goal and point of the project. It was simply to beat others – notably Germany – to ensure such power didn't end up in the wrong hands.
And that's exactly why the comparisons with AI keep getting drawn. Obviously, there are parallels in the build-out of AI and the race to AGI – the atom bomb in this scenario. But again, AGI would presumably have good uses as well as bad – perhaps even profoundly so. Sure, some people view the race as ensuring that America gets control of such technology first for defensive reasons (which may shift into offensive reasons, just as the aforementioned Department of Defense has shifted back into the Department of War). But most of those building it view it as simply trying to move technological – and thus societal – progress forward.
Yes, many disagree with those notions. But certainly no one would say that there aren't any good, positive uses for AI. Those who are so worried about it simply view the negatives as outweighing the positives – ranging from day-to-day usage all the way up to, again, the end of humanity. But even the so-called "doomers" would not deny the potential for positive usage too. Again, what is the positive use case of a nuclear bomb?
So can we please de-escalate from that analogy? It just makes everyone crazed – on both sides of such debates. Even now, it's what's guiding a lot of the back-and-forth around whether a private company should have some say over what a government can or cannot do. The hypothetical is: what if a company controlled a nuke? Or mass produced them?
That is, of course, illegal. But we're not talking about making the creation or advancement of AI illegal. We are talking about putting some level of guardrails in place, and yes, governments are undoubtedly far behind in their ability to reliably do so, simply because they act far too slowly and AI moves far too quickly. Still, no one wants to deem work on AI illegal – well, perhaps some do, but no one serious – and that is just not the case with nuclear weapons.
So if people really want to use that analogy, they should also be saying that the government should take over control of the build-out of the technology. Many think they should be more involved – including the companies themselves, not least because they need money and red tape cut – but no one (again, no one serious) is suggesting the government take over full control of production.
And while some have suggested an actual Manhattan-like Project for AI, it's too late for that. Again, the technology is moving far too fast and any government would move far too slowly.
I know it's tempting to use the nuclear analogy, especially given the current conflicts – both the actual conflicts around the world in which the US is engaged at the moment and the political ones happening within the United States itself – and the natural adjacencies. But it also just doesn't seem helpful, in part because of those actual conflicts. AI is not a nuclear weapon and we shouldn't portray it as such.
I don't know if the better analogy is electricity or the internet or some other profound breakthrough with both good and bad implications. And I want to acknowledge that it's entirely possible that AI ends up in a state where the bad does outweigh the good. I personally don't believe that's likely, but it's certainly possible. But I do know there are no good outcomes with nuclear weapons. We now start wars over such beliefs...

1 Two sub-problems here. First, North Korea already has nuclear weapons, of course. Second, this splinters the analogy because here it's the NVIDIA chips that are the nukes, not AI itself, which is, of course, the product of those chips. ↩