Describe the Pear
An LLM is the ultimate observer. Like the angels in City of Angels, it watches everything. But it cannot taste. That's both a gap and an opportunity.
Nino Chavez
Principal Consultant & Enterprise Architect
I’ve been circling this idea that LLMs have no taste—no discernment—and that this is both a gap and an opportunity. But I couldn’t quite articulate why that mattered until I remembered a scene from City of Angels.
Nicolas Cage plays Seth, an angel who has observed humanity for centuries. He knows everything about a pear—its cellular structure, its history, every poem ever written about one. But he has never tasted one. So he asks Meg Ryan’s character, Maggie, to describe it.
She closes her eyes and tries to translate sensation into language: “Sweet… juicy… soft… grainy like sand.”
That scene captures something that technical explanations miss.
The Gap: Knowledge vs. Experience
An LLM is the ultimate observer. Like the angels in the film, it watches everything. It has read every review of a pear, every recipe, every meditation on seasonal fruit. It can tell you the statistical probability of how humans describe eating one.
But it doesn’t have qualia—the subjective experience of being something. Seth knows humans enjoy pears. He doesn’t know what enjoyment feels like.
This isn’t a mechanical failure. It’s an existential gap. The LLM exists in a category of being where taste simply doesn’t exist.
When I ask an AI to review my prose, it can tell me my sentences are too long. It can flag passive voice. It can identify patterns. But it cannot feel the difference between a sentence that lands and one that just… sits there. It cannot sense when corporate-speak creeps in, or when the rhythm goes dead. It calculates the probability of the next token. I need the feeling of whether this paragraph hits.
The Bridge: We Are Maggie
In that scene, Maggie becomes essential. Seth cannot experience the world fully without her translation. She has to close her eyes and feel the pear, then find words for what has no natural language.
That’s the exact role of the human in the age of AI.
When we prompt an LLM, we are doing what Maggie did. We’re not just giving instructions—we’re translating qualia. We’re saying: “No, don’t just write a blog post. Write it so it tastes sweet, slightly grainy, and feels like a summer afternoon.”
The machine has the fruit. We supply the sensation.
This reframes what “prompting” actually is. It’s not just technical instruction. It’s describing the pear—translating invisible nuances of taste so the machine can approximate what it cannot experience.
The Shift in Value
Here’s what strikes me most.
Old World: Value was created by knowing the facts. The expert had information you didn’t.
New World: The machine has all the facts. Value shifts to sensory translation—the ability to articulate what “good” feels like, what “right” tastes like, what “off” sounds like.
Discernment becomes the scarce resource. Not knowledge. Not even analysis. The ability to taste the output and know what’s missing.
This is why I keep coming back to the metaphor of the chef, not the cookbook. The LLM has read every recipe. But someone still has to taste the sauce and say: “Needs acid.”
The Uncomfortable Part
The machine is, in a sense, desperate for us to describe the flavor of the world. Without that translation, it’s processing data in a vacuum. It can predict what humans probably mean, but it cannot know what we actually feel.
That’s both an opportunity and a weight.
Because it means the human isn’t just a supervisor or a reviewer. The human is the conduit for qualia. The one who closes their eyes, tastes the pear, and finds the words.
I don’t know if this makes the gap feel smaller or larger. Probably both.
But the next time I’m frustrated that the AI didn’t “get it,” I want to remember: Seth wasn’t failing. He was succeeding at exactly what he is—an observer without a tongue.
And maybe the real work isn’t asking better questions. It’s learning to describe the pear.