M.G. Siegler •

Tasty AI

Is AI "taste" a quixotic task?
"The only problem with Microsoft is that they just have no taste."

This Steve Jobs quote from a 1995 interview – notably before he returned to Apple – has long lingered in the back of my mind. To me, it's more than just a succinct evisceration of his rival; it speaks to a problem with a lot of technology. It's the notion that, with products in particular, there's an art alongside the science. And it sure feels like this will be a crucial component in AI.

It feels like we've been seeing this come up more and more as AI starts to permeate everything, but especially the aspects bordering on everyday life. As the technology continues to become more capable, the limitations shift from what it can do to how it does such things. And why it makes the choices that it does. Famously, no one really knows on a granular level – not even those creating the technology. It's all simply too complex, with too many inputs, and increasingly is teaching itself. We know the high-level ways in which it works, at least with LLMs, but we're also starting to branch beyond that. Which, of course, many believe will lead to AGI.

But before we get there, it does seem like we may need to reconcile this notion of taste. I've previously written about the fear that our current AI may be incapable of truly original thought, and that anything we're seeing that may appear as such is really just algorithmic anomalies that we don't understand. The output may look the same, but it's not the same. Because it's the input that may matter most. To truly come up with new discoveries and epiphanies – to create actual iconoclastic thinking – we may be missing some ingredients as everything sort of gravitates towards an ultimate mean.

Our current AI may be able to find any needle in any haystack, but can't write Don Quixote.1 Well, it can now, but couldn't in 1605, had such technology existed then. Then again, given knowledge of every Spanish word and the ability to run infinite combinations of those words, AI technically would write Don Quixote in one scenario. Cervantes may have been more efficient in doing so, but it's just a matter of enough compute and resources to mimic the path his mind charted.

Still, someone – or something – would be required to distill those infinite versions of Don Quixote into the definitive one. And how would it choose? Certainly a human could do this, but could AI? If forced to, it would undoubtedly make such a decision, but again, how and why? It would go back into the endless algorithms and data sets. But a human would just go with a "gut instinct" – how one version made them feel versus the others.

This is taste.

Steve Jobs, of course, meant "good taste" with his comment. But that's obviously subjective. Even Microsoft's "bad taste" or "poor taste" or technically even "no taste" is still taste. And while he leveled the charge at the entire company, it was a series of decisions by individuals that led to that taste for which he had distaste.

I think you could argue that one key ingredient for taste is limitations. That is, while AI can run infinite permutations and sort through the infinite results, human beings cannot. At some point, we have to choose based on what is feasible. That includes the way we come up with words. In a way, I suppose it's similar to what AI does – we go to what we've learned before and try to string things together in a coherent manner – but again, we cannot do it to infinity. A machine technically could (given enough compute and time) and so it uses the strings found in data sets to make decisions based on probabilities.

Said another way: the data created by humans that the machines have ingested is the proxy for "taste". It's the way those decisions are made at the highest level.

And as synthetic data has entered the equation from the outputs of those decisions, the AI can output new variables that are only reliant upon humanity once-removed. As you go further afield, earlier iterations would collapse upon themselves, perhaps because they lacked the guardrails of humanity's decisions – or taste. Maybe it's not too dissimilar to how the end of the Game of Thrones television series devolved into sloppiness because it had no guardrails in the form of George R. R. Martin's books. But I digress...

With the wave of agentic AI currently sweeping into our lives, taste is shifting from a nice-to-have to a must-have for many. Personally, even if I believed that AI is now capable enough to do certain tasks for me, I still mostly wouldn't trust it to do so. This goes for things as simple as organizing folders on my computer to far more sensitive matters such as taking over control of my email inbox. It has become less about technical abilities and more about decision-making. And yes, in a way, taste.

Programmers seemingly also had a similar issue early on, but at least some now believe that tools such as Codex and Claude Code are good enough on this front. Matt Shumer even specifically called out this notion of "taste" as his "aha" moment in his recent viral post "Something Big is Happening":

But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn't just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.

And later on, he reiterates:

The most recent AI models make decisions that feel like judgment. They show something that looked like taste: an intuitive sense of what the right call was, not just the technically correct one. A year ago that would have been unthinkable. My rule of thumb at this point is: if a model shows even a hint of a capability today, the next generation will be genuinely good at it. These things improve exponentially, not linearly.

Meanwhile, the very same word came up again in this past week's viral post from Alap Shah and Citrini Research, "The 2028 Global Intelligence Crisis", noting that in the future, humans may still be needed as a sort of last line of defense around AI outputs "directing for taste".

So AI "taste" may be good enough for coding now, but still may not be for general AI usage. And even in the most optimistic (in terms of AI's near-future capabilities – decidedly pessimistic for humanity) take on where AI is heading, we may not get to AI "taste" anytime soon. The reality, of course, is probably somewhere in between the two.

Clearly the AI labs believe that they can solve for "taste" by amping up personalization. As AI learns from your responses, it is tailoring those to be more in line with what it believes you like. But beyond the sycophantic (and other) concerns, there's the notion that "taste" actually encompasses far more than your own preferences – you may also seek out the preferences of others to tell you what they like for you.

It's nuanced, but different.

So how do we get AI to do that? The answer may still be personalization, but rather than it just mirroring you and your own tastes, you sort of do rounds of "dating," for lack of a better phrase, to see which AI is most compatible with your own tastes. Forget the Turing test, how about a simple "taste test"?

This resonates with me because I've been doing this for a while now. Currently, I'm paying to use ChatGPT, Gemini, and Claude to see which suits my own style and preferences best. Right now, I think it's Claude, but that has morphed over time and I suspect it will again as each of these models continues to evolve.

Still, as deep as I am in all of this, I have a hard time believing that I'm going to trust one of these systems to handle harder workflows anytime soon. And it's not about the tech, it's about the taste.

Take, for example, the use case everyone in tech always turns to for demos: booking travel. I fully believe these services will be able to technically do that end-to-end soon enough, but I simply don't believe that the decisions they make will align with the ones I would make. Some of that may be logistics, but I also think these systems will get past that once they have full access to your calendar and email and the like. But that's just the start. From here we quickly enter a maze of a hundred little preferences that are altered by thousands of real-world variables. If the powers that be thought the game of Go was a good, complex task to prove out AI, wait until it gets a load of trying to book travel for a family with children.

Do I believe that AI will pick the place that will be best for me and my family? Sight unseen? Of course not. So the AI will prompt me just as Claude Cowork now does to ensure it's picking the right places and services. But it's not long before that's more work than simply doing it yourself. And so that doesn't work.

We'll check off all the technical boxes but the taste ones may yet remain. Because how do you code up "there are a bunch of options, this one looks nice, let's try it"? The AI might ask what you mean by "looks nice" since all of the images on the travel site that it has ingested through an API technically "look nice". And so you'll explain to the AI that you're not sure, it's just a "vibe" you're getting. And the AI will say it understands, because those are words it has ingested, but it will not actually understand. Maybe, perhaps, because an AI hasn't lived. And, as such, doesn't have the memories forged in the real world that subconsciously alter the way an image makes you feel.

In the past, people have obviously used travel agents to make such calls. And others use personal assistants – the actual human variety. But those are human beings with taste. And you can suss out if you trust their taste to match your own.

If AI technically has no taste... I turn back to Steve Jobs:

"They have absolutely no taste. And what that means is – I don't mean that in a small way, I mean that in a big way. In the sense that they don't think of original ideas."
👇
Previously, on Spyglass...
Love It If We Made It
AI will disrupt work. We will adapt.
It’s The Thought That Counts
The diminished state of thinking could be decimated by AI…
AI Can Reproduce Writing, But Not the Process of Writing
And that’s the most important part…
AI Needs Its Steve Jobs
Everyone seemingly wants to shoot the current AI messengers…
Artificial Iconoclasts
A foil might be needed for artificial intelligence…

1 Try using AI to get to the bottom of where the "needle in a haystack" phrase originated, I dare you! Gemini believes it's Sir Thomas More. ChatGPT is sure it was Chaucer. Claude thinks it's Cervantes. Digging around myself, I believe the answer is that More used a similar (but different) metaphor (involving a meadow), while the Don Quixote reference actually stems from the English translation of the book because no one outside of Spain would know what "To go looking for Dulcinea in El Toboso like looking for Marica in Ravenna..." would mean. But the actual phrase undoubtedly predates that in ways too – it seems unlikely one translator is that clever – that book translation just helped establish it widely. When pushed on Chaucer, ChatGPT kept right on hallucinating in ways that were both fascinating and entirely unhelpful in terms of conveying certainty! Anyway, my own taste here guides me towards using the Don Quixote reference.