Super Ultra Megaintelligence™
Right before I left on vacation a couple weeks ago, I was back on Alex Kantrowitz's Big Technology Podcast. It was right after Mark Zuckerberg posted his spartan memo on "Personal Superintelligence," so we riffed on that a bit (I expanded on those thoughts here). That morphed into a broader discussion here about Meta's overall AI strategy and how it may play out.
That leads right into a back-and-forth about the branding debate around AI/AGI/Superintelligence. Obviously, it's mostly marketing and all a bit silly, but there's also clearly some strategy involved behind the scenes – perhaps most notably between OpenAI and Microsoft, given "The Clause" in their contract around AGI. And now Meta, playing catch-up, wants to make it clear that they're not just going to make the same type of "superintelligence" as everyone else. Anyway, we're now but a few steps away from Super Ultra Megaintelligence™.
Who will build that? Undoubtedly one of the current players in AI, but Big Tech might be okay if it's one of the "startups" because they actually own huge chunks of the various players in Big AI. With OpenAI now hitting a $500B valuation via an ongoing secondary sale, if Microsoft ends up owning 33% of the company, that's... $165B. Maybe that doesn't seem like a lot of money when you're worth $4T, but that would seemingly be larger than most of Microsoft's other business units. And let's just say that one day OpenAI goes public and is worth a few trillion itself. Well, you can do the math!
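If you'd rather not do the math, here's a trivial back-of-the-envelope sketch. The 33% stake is the figure discussed above; the "few trillion" future valuation is purely a hypothetical, not a reported number.

```python
# Back-of-the-envelope stake math. The stake percentage and the future
# valuation are assumptions for illustration only.

def stake_value(valuation_billions: float, stake: float) -> float:
    """Value of an equity stake, in billions of dollars."""
    return valuation_billions * stake

# At the ~$500B secondary-sale valuation, a hypothetical 33% stake:
print(stake_value(500, 0.33))    # 165.0 -> ~$165B

# If OpenAI were one day worth, say, $3T:
print(stake_value(3000, 0.33))   # 990.0 -> just shy of $1T
```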
How do you value such companies anyway? It's all a bit finger-in-the-wind-y, or pie-in-the-sky-y if you prefer. It's also all just clearly relative to what any competitor's valuation is currently... But to justify it all, will the current business growth be enough? They're probably going to need their own Google- or Meta-style monetization breakthrough. Not necessarily something completely new, but a new twist on an old way to monetize, made more effective at scale...
Until then, the massive spend will continue – including on the employees needed to leverage all that AI CapEx. At the same time, employees who aren't vital to that part of the business may be, unfortunately, expendable. It's an "enigma" of a time – the best of times, the worst of times...
Lastly, I truly believe – and argue – that AI isn't going to fully replace human-powered writing. For certain types of writing, sure. But with other types of writing, the input is just as important as the output. That is, what you put into it directly shapes what you get out of it. Said another way: to many people, myself included, the process of writing is just as important as the writing itself. Many people who don't write for a living perhaps don't realize that. But they will. Or they won't, and they'll be worse off for it.
Yes, cheating is an issue. But it's often an issue with new technology. The more problematic element is if you just end up cheating yourself...