A Tale of Two AiPhones

Google and Apple are building the same thing in pretty opposite ways

I finally got around to catching up on Google's Pixel and Gemini announcements from last week (I was on the road). Overall, it seemed like a good event – one which featured actual live demos in both good and bad ways (the first such demo failed twice – but at least you knew it was really live!). Good enough that I suspect it may pressure Apple to go back to live, on-stage events again. The pre-recorded ones have just gotten a bit too slick, IMO. They could and should still use those for smaller "press release" launches. But they're missing the energy of a live showcase.

Anyway, obviously everyone immediately compared what Google was doing with their 'AI Phone' to everything Apple is doing with their 'aiPhone'. Though we're still a few weeks away from the formal iPhone 16 unveiling, everyone already seems to know what's coming – down to the shit brown bronze color – just as they did with the Pixel 9 devices. It's the software, and in particular the AI capabilities, that would now seem to matter just as much, if not more. Google's stuff is shipping now, while Apple is taking a much slower approach to AI.

That's not to say Google's approach is perfect. The early reviews of Gemini Live seem mixed at best. This more conversational AI is similar to what Apple showed off with the next version of Siri (which, again, won't be shipping until next year), and perhaps most akin to OpenAI's GPT-4o voice capabilities, which were shown off months ago but still haven't widely rolled out. It sounds good in demos but quickly leaves a lot to be desired when people actually use it.

Still, that should improve quite rapidly, especially considering that Google's approach, at least with Gemini Live, is entirely in the cloud. In many ways, it's the opposite approach to the path Apple is taking. Apple defaults to on-device with the cloud as back-up, while Google, at least for their flagship AI feature, is cloud-first. Both tout privacy, of course, but as Google SVP Rick Osterloh put it in a chat with Joanna Stern, you need both personal context and context about the world. Apple, at least for now, seems to be outsourcing the latter to OpenAI (and soon, perhaps, Google too!).

The AI features Google chose to highlight range from interesting to impressive, though it's not entirely clear how much use any of them will get on a daily basis – which is also a different framing than Apple Intelligence's. The most interesting one to me was the new screenshot app, since I've written about the desire for such functionality for years and invested in a (now sadly defunct) startup that was working in the same space. AI is finally making this feasible.

The new Pixel 9 Pro Fold, mouthful of a name aside, looks great. I have the first Pixel Fold as my Android test device and I actually quite like it. While the crease in the screen isn't ideal, it's a fun form factor and it looks like the new version is significantly nicer pretty much across the board. It also appears to be completely sold out online at the moment. One has to wonder if this puts some additional pressure on Apple to do their foldable sooner rather than later.

Move Slow and Make Sure Everything Works
Apple’s approach to their AI roll-out is in stark contrast to others…