ChatGPT Will Test Apple's Fences
After the WWDC keynote, Casey Newton got a chance to see demos of some of the Apple Intelligence features – many of which won't be coming to a device near you anytime soon. A few interesting tidbits from the (paid) Platformer newsletter today:
You can use Siri to ask ChatGPT questions directly. In the keynote, I noticed some apparent friction in using Siri’s new ChatGPT integration: for privacy reasons, Siri asks for permission before it sends your query to ChatGPT.
But there’s a workaround: if you know you want ChatGPT’s opinion, you can just tell Siri to ask ChatGPT and it will do so without double-checking with you first.
That's good news, as I imagine it would get quite annoying to have to opt in to sending information to OpenAI every single time you wish to use ChatGPT. But it's a delicate balance Apple has to strike here with their overall privacy/security message – an every-time opt-in makes that easier – to everyone not named Elon Musk, apparently – and a direct request to use ChatGPT seems like an explicit enough opt-in to get around a pop-up.
Also on the topic of ChatGPT, Newton says that Apple doesn't plan to rate-limit usage through Siri, as happens now on the free version of the service. Though you have to wonder, if this functionality gets used an extreme amount, whether OpenAI will try to rate-limit it on their end. But obviously Apple must have thought of that, as it would lead to a shitty experience for the end user. So presumably OpenAI is just willing to take that risk. Though how happy will Microsoft be about it, since this is all still presumably running on their servers?
Happy enough if they're getting paid, I guess (though there are other tensions with helping Apple in AI here), but that just leads to the question of who is paying whom here. The current consensus seems to be that there may be no money changing hands between Apple and OpenAI. Though perhaps it's possible that if usage hits a certain threshold, Apple will have to pay some amount to keep the pipes open, as it were.
Apple is about to become a content moderator at scale. Until now, Apple has mostly been able to avoid hot-button debates over what content it will and won’t allow on its platform. That may be about to change: the company is about to roll out text-to-image generators to some healthy percentage of its 1 billion users, and the company should expect to begin hearing lots of opinions about what images and text it will and won’t allow users to create.
Like any company making social products, Apple executives clearly hope that these features will primarily be used to surprise and delight. But the trolls will surely be ready to test the limits.
This issue feels like it's clearly why Apple is limiting image creation to just three "types". That feels almost comically restrictive compared to the current state of the art in generative AI, but it also stops users from creating realistic-looking images, which are a landmine always waiting to explode. Still, as Newton notes, there are about a billion shades of gray even just within cartoon-y image creation. And Apple is both going to have to make those calls and is inevitably going to get in trouble – even if just in the press – for something created with their tools.
I’m also curious what refusals we might see in the Genmoji feature, which lets users create custom emoji. What exactly will Apple let users do with the gun emoji, which it transformed from a pistol into a water gun in 2016 to make it less threatening? What about the eggplant emoji?
This feature alone is going to lead people to test the limits like velociraptors attacking the electrified fences in Jurassic Park. 😈