Love It If We Made It
Seemingly my entire social feed is filled right now with people sharing "Something Big Is Happening", an essay by entrepreneur Matt Shumer. I think it's a pretty good overview of the current situation in AI meant to be read by the layperson so that they can share the thoughts for discussion around the proverbial dinner table. And actually, so they can get prepared for what's coming and take action. To that end, he kicks off by framing it as a pandemic-like situation, where the world is about to change. Time to stockpile that toilet paper and hold on to your butts.
It's mainly good because it's digestible – unlike, say, Anthropic CEO Dario Amodei's latest essay, "The Adolescence of Technology", which Shumer cites, and which is about 20,000 words. Shumer's post clocks in closer to 5,000 words. So "short" that he actually published it on Xitter – which remains just a truly awful place to read anything longer than 140 characters, but I digress...
My read on this read is that it's a bit too alarmist, but still a useful thought exercise for most people. I mainly say that because my belief is that for as fast as all of this is moving – and to be clear, it is moving insanely fast – I suspect the ramifications will still take far longer to play out. I make this "cold water" prediction quite often these days, but it's nothing profound or particularly insightful; it's simply the way most things play out, certainly with technology.
Yes, the pandemic swept in and changed things faster. But that's part of why it's not a great analogy here. Is the world about to change in a couple weeks? No. At the same time, is the world about to change in more long-term ways than it did from that pandemic? Yes.
After reading the post, I actually followed Shumer's advice and had AI do my work for me. My "work" here being to respond to this essay – how meta! I asked Claude – and specifically, Opus 4.6, one of the breakthrough models that Shumer cites – to study some of my previous writing and write a response in my style. That response, which I'll paste below these words that I actually did write – I swear! – is pretty good! I don't think it nails my exact style, but it has its moments – and honestly, others can probably judge that better than I can, as I'm max biased in this case. It makes some good counterpoints, including about the pandemic analogy, which may have shaped my own paragraph above! I'd like to think I would have said that regardless, and perhaps the AI was just able to predict that, but... how far down this rabbit hole do we want to go here?!
Anyway, I laughed at points. ("The virus didn't need a Substack.") And was generally impressed by the output.1 I often find this to be the case with Claude, which I've recently been working into my daily routine alongside ChatGPT and Gemini (yes, I pay for each, to constantly test them, which is also Shumer's advice, which I definitely agree with, though it's certainly fine for most people to just pick, and pay for, one to test).
Wait. I should back up.
As I'm suggesting above, as strange as it may seem, I actually haven't used AI to write something for me in my own style before. I mean, I think I did in the very early days of ChatGPT to see what might happen, but it was pretty bad and rudimentary at that point. In the intervening years, I've never had the urge to do this, nor felt the need. It's not that I'm afraid of doing it, and not even really that I think it's below me (though yes, I do), I just really don't see the point. Because to me, as, yes, I've written about before, the point of writing is just as much about the process of doing it as it is about the output. Actually, I think it's far more about that process, which is my big takeaway from our current AI revolution and this latest experiment.
Yes, AI could write a rebuttal for me. And yes, it can be quite good! But what is the point of that? Just to put something out there? To what end? I guess maybe if I wanted a quick and easy way to "thought lead". But even then, to what end? I wouldn't actually be "thought leading" because they wouldn't actually be my thoughts! They may look and/or sound like them, but that doesn't change the fact that they're not them, because I didn't actually think them. That might not matter to others, but it matters to me!
Because again, what I get out of writing this comes from the very process of writing it! The thinking about it! Forming thoughts and letting my mind wander. Expressing my actual opinion on a matter, not outsourcing that thinking to technology.
Sure, I guess if I wanted to make a quick buck by monetizing those thoughts in some way, I could do that. But we have a word for that: spam.
All of this points to some thoughts I had around the whole "Moltbook" situation. We're in a world where bots can talk to other bots, and I think that's interesting and eventually useful for all of the "agentic" stuff we want and need AI to do for us. But there remain a lot of things that you're going to want to do for yourself. Not because an AI can't do them, but because you actually derive value from doing them. To me, writing is the best example of that. But there are many others. And we're going to increasingly discover them in this new world we're entering.
Said another way, and to harp on the point I keep making, the inputs matter just as much as the outputs, and in the Age of AI, they're probably going to matter even more!
The clear impetus for Shumer's post is that he's a developer and his "holy shit" moment was realizing that OpenAI's latest GPT-5.3 Codex model could fully do the coding work he needed, from writing, to testing, to deploying. This is clearly where AI is going to have the biggest and most immediate impact on our world. It's already happening – as Shumer notes, the AI is being used to write the AI applications themselves. You don't have to extrapolate out too far to see a world in which AI starts improving itself, which is the "breakaway" moment that Amodei and others have been talking about and warning us about. It will happen and it is something we need to watch closely, obviously.
But the disruption of the day-to-day code writing seems unlikely to play out as seamlessly across other industries. As many have noted in the past, AI is uniquely suited to write code because of the way LLMs work. Other jobs – and jobs-to-be-done more broadly – will likely require other variants of AI that perhaps aren't as probabilistic.
Even still, I might argue that if there's no value that developers get out of the input of coding – actually doing it – perhaps it's better if it is automated away? I suspect some developers do derive value from coding though. So they might want to do it anyway? Or it might be a hybrid situation where they do the parts they want – perhaps the creative parts – and they let AI do the rest. This has already been happening, of course. And if there is a coding job to be done that can simply be automated away with a few commands, that's probably for the best for everyone aside from maybe the entry-level coders who just spent a lot of time learning a specific programming language. That sucks for them, but I also suspect they'll go on to find other and better uses of their time!
In my own world, I think about email. I've always hated it and would love not to have to do it. So I will gladly outsource that task to AI if and when the technology is up for it. But even then, there will be parts that I want to and/or need to do myself, so that the knowledge from some of that work ends up in my own brain.
The above probably applies to some legal work as well (another example Shumer cites, and one that is obviously hot right now in the world of AI). There are the tedious document reviews that a human probably doesn't need to or want to be doing. But there will undoubtedly be other legal work that humans actually want to be doing. Maybe AI could do some of it, but if the lawyers actually derive value from it, maybe it's worth the cost. The cost being time and perhaps a lack of cost savings for the law firm (which, admittedly, is another complicated matter).
In general, we will need to find new business models for many jobs. But that has been the case throughout much of history.
Here's where I'll note that I think Shumer's strongest point, and one that resonates with me the most, is about how best to situate your children for this new world. The answer, of course, is that nobody knows. But to that end, I think his point about making sure that kids are malleable enough to adapt, both in their education and eventually in their work, will be critical. Because it's not like things are just going to change and that will be it. The situation – the world – will keep evolving. Again, it's not a pandemic-like situation. It's perhaps more like climate change.
This is all a long-winded – though less long-winded than Shumer's post, which itself was less long-winded than Amodei's post – way of saying I think we're going to make it. AI is going to disrupt a lot of tasks and industries, but not in two weeks and not in two months, and probably not even in two years for many jobs. For coding? Probably. Legal document review? Ideally. Email? Hopefully! But things like writing? I mean, if you want to task AI with that, then sure! But you might not want to, at least not for everything.
That's the point. We're going to figure out the right balance here. Just because AI may eventually be able to do everything doesn't mean that it should. Or that it will. Because we will choose for it not to for certain things we wish to do and/or that we get value out of. Longer-term, our mindset around work will have to shift away from the tedious tasks that we have to do and toward recognizing and focusing on the work that we want to do. This sounds great! But I suspect that it's going to be no less of a hard shift to make, mentally, emotionally, and even physically. Because it will just be different. As the focal point shifts from the output to the input...







1 For the record, I would have relied less heavily on crypto and certainly VR as the counter-examples, as they're simply not strong enough points. Sorry, Claude.
Something Big Is Always Happening
There's a blog post making the rounds this week by Matt Shumer — an AI startup founder and investor — called "Something Big Is Happening." It's essentially a letter to friends and family telling them to wake up because AI is about to change everything, like, now. It uses the COVID pandemic as its framing device. It predicts the imminent end of most white-collar jobs. And it has gone quite viral on the strength of its urgency and its "I'm just telling you what happened to me" sincerity.
I think the post is well-intentioned. I also think it's almost perfectly engineered to be wrong in the ways that matter most.
Let's start with the COVID comparison, because it tells you a lot about the rest of the piece. Shumer writes that we're in the "this seems overblown" phase of something much bigger than COVID. The implication being that the skeptics are the ones who weren't paying attention in February 2020, and that he — an AI insider — is the one stockpiling toilet paper.
The problem is that COVID worked as a shock precisely because it was fast and binary. One week you were going to restaurants, the next week restaurants didn't exist. AI is not that. AI is a technology that has been improving — sometimes rapidly, sometimes not — for over a decade. We've had approximately 47 "everything just changed" moments in the past three years alone. Some of them were real! Most of them were vibes.
COVID didn't need evangelists to write blog posts telling you it was real. It just was real, and it hit you whether you believed in it or not. If AI were truly at the "two weeks before lockdown" stage, the last thing you'd need is a 5,000-word essay begging people to pay attention. The virus didn't need a Substack.
Now, to be fair, there's a kernel of truth in the piece. AI has gotten significantly better. The models released in early February — Opus 4.6, GPT-5.3 Codex — are genuinely impressive. If you haven't used the best available models recently, you probably should. On this point, Shumer and I agree completely.
But there's a vast chasm between "these tools are really good and getting better" and "50% of entry-level white-collar jobs will be eliminated within one to five years." And Shumer leaps across that chasm with the confidence of someone who has never watched a technology adoption cycle play out in the real world.
I have. I spent years covering the tech industry as a reporter and then over a decade as a venture investor. And the single most reliable pattern I've observed is this: the people building a technology are constitutionally incapable of accurately predicting how fast society will absorb it. They always, always think it will be faster than it is. Not because they're lying, but because they're extrapolating from their own experience — and their own experience is not representative of anything.
Shumer's big revelation is that he can now describe an app to AI and have it built without much intervention. I believe him! That's genuinely cool. But the leap from "AI can write code for an AI startup founder who has been using these tools for six years" to "AI will replace your lawyer, your doctor, and your accountant within a couple of years" is... well, it's a leap. It's the kind of leap you make when you've been too deep inside the bubble for too long.
Let me address the specific claims, because they deserve scrutiny.
Shumer cites Dario Amodei's prediction that AI will eliminate 50% of entry-level white-collar jobs within one to five years, and then says "many people in the industry think he's being conservative." He presents this as though it were a sober assessment from a credible authority. And Amodei is credible — probably the most thoughtful CEO in AI. But it's also worth noting that Amodei runs a company whose valuation is directly tied to the belief that AI will become extraordinarily powerful extraordinarily quickly. Every AI CEO in the world has an incentive to hype the timeline. That doesn't make them wrong. But it does mean you should apply a discount rate to their predictions, the same way you would to any CEO talking about the future of their own industry.
The METR benchmarks get cited — AI completing tasks that would take a human expert "nearly five hours," with the number doubling every seven months. This sounds terrifying until you think about what "tasks" means in the context of a benchmark. Benchmarks measure what benchmarks measure. They're useful indicators of progress, but the history of AI is littered with benchmarks that were "solved" long before the real-world equivalent of the benchmark was anywhere close to solved. Passing the bar exam on a multiple-choice test is not the same thing as practicing law. Completing a coding task end-to-end in a controlled environment is not the same thing as shipping production software at a Fortune 500 company with legacy systems, compliance requirements, and a VP who keeps changing the spec.
Then there's the "AI helped build itself" moment, which Shumer presents as the most important and least understood development. OpenAI said GPT-5.3 Codex was "instrumental in creating itself" — used to debug training, manage deployment, and diagnose evaluations. This is interesting! It is also... not the singularity. Software has been used to build software for as long as software has existed. Compilers compile themselves. The question isn't whether AI can contribute to AI development — of course it can — but whether this creates the runaway recursive loop that Shumer implies. And on that, the evidence is far from clear. The gains from AI-assisted AI development could plateau. They could be large but linear. The "intelligence explosion" framing assumes a specific exponential dynamic that is not guaranteed by the underlying mechanics.
The most revealing part of the piece, honestly, is the advice section. Because the advice is... fine? "Start using AI seriously." Yes. "Lean into what's hardest to replace." Sure. "Get your financial house in order." Always good advice, AI or no AI. "Spend an hour a day experimenting." Not bad!
But here's what's funny about this: if AI were truly about to do to white-collar work what COVID did to in-person dining — which is the explicit comparison Shumer makes — then "spend an hour a day experimenting with AI" would be hilariously inadequate advice. You don't tell someone to spend an hour a day experimenting with pandemic preparedness when the pandemic is two weeks away. The modesty of the advice quietly undermines the extremity of the prediction.
What Shumer is actually describing, if you strip away the apocalyptic framing, is a technology that is very useful, improving quickly, and that will probably change a lot of jobs over the next five to ten years. Which is... correct. And also not a novel observation. And also not COVID. It's closer to the internet, which did in fact transform virtually every industry — but over the course of decades, not months, and in ways that were far more nuanced and surprising than anyone predicted in 1995.
I should be clear about something: I am not an AI skeptic. I use these tools daily. I've tracked the capital expenditure numbers — the hundreds of billions that Amazon, Google, Microsoft, and Meta are committing to AI infrastructure. I take this seriously. I've invested in companies in this space. The technology is real and it is going to matter enormously.
But I've also been around long enough to recognize the genre of what Shumer wrote. It's the "I need you to understand what I understand" post. The "I've been holding back but now I owe you the truth" post. It's a genre that recurs with every major technology wave, and it always has the same structure: personal revelation, exponential trend extrapolation, dire warning, call to action. I've seen this post written about the internet, about mobile, about crypto, about the metaverse. Sometimes the underlying technology really is transformative. It just never transforms on the timeline or in the way that the "something big is happening" post predicts.
The people who wrote the equivalent post in 1995 about the internet were right about the big picture and wrong about almost every specific prediction. The people who wrote it about crypto were... mostly just wrong. The question with AI isn't whether it matters — it obviously does — but whether the specific doom-and-urgency framing is warranted by the current evidence.
I don't think it is. Not yet, anyway. And I say that as someone who has felt the ground shake — perhaps a bit earlier than most, given where I sit. But earthquakes and volcanoes are different things. The ground shaking doesn't necessarily mean everything is about to be buried in lava.
What I'd actually tell the friends and family that Shumer is addressing: use these tools. They're remarkable. Pay the $20 a month. Experiment. Be curious, not panicked. And when someone who runs an AI startup tells you that AI is about to change everything in the next one to five years, remember that someone who ran a crypto exchange told you the same thing in 2021, and someone who ran a VR company told you the same thing in 2016. The technology may be different this time. The incentive structure for the predictions is exactly the same.





