Are We Just Language Models in Meatspace?
Maybe you’re not so different from an LLM. You just hallucinate with more confidence. This piece unpacks the “coherence trap,” why AI feels smarter than it is—and why that might reveal something uncomfortable about us.

Every time we marvel at how fluent an LLM sounds—or panic at how "human" it’s becoming—we’re revealing something uncomfortable about ourselves:
Maybe we’re not reasoning beings. Maybe we’re just coherence machines with skin.
Large Language Models (LLMs) are trained to predict the most likely next word in a sentence, based on patterns learned from massive amounts of human language data. They don’t think, feel, or understand. They just generate responses that sound plausible. Yet despite this, they regularly fool us.
Why?
Because they’re incredibly good at sounding right—even when they’re wrong. And that’s a lot closer to how human cognition works than we’d like to admit.
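If the “predict the next word” framing feels abstract, here’s a deliberately crude sketch of the idea: a toy model that counts which word tends to follow which in a tiny corpus, then generates text by always picking the most frequent continuation. This is my illustration, not how real LLMs are built (they use neural networks over subword tokens and far more context), but the objective is the same, and so is the punchline: the output can sound fluent without anything understanding it.

```python
# Toy sketch of next-word prediction: learn which word follows which in a tiny
# "training corpus", then generate by always emitting the most frequent continuation.
# (Real LLMs learn this with neural networks over subword tokens, not raw counts,
# but the training objective -- predict the next token from prior context -- is the same.)
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the cat chased the dog . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(word, steps=6):
    """Greedily emit the most likely next word, over and over."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break  # no continuation seen in training data
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # fluent-sounding, pattern-driven, no understanding anywhere
```

The output reads smoothly enough, and that is the whole trick: pattern continuation, with zero comprehension behind it.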
🎭 The Coherence Trap?
We humans are wired to value coherence. We mistake the appearance of logic for the presence of intelligence. This is helpful when parsing human conversation—coherence usually implies a mind behind the words. But it becomes a liability when we interact with language models, where no mind sits behind the words at all.
While LLMs spin text from patterns alone, our words spring from bodies that see, feel, and move in the world.
These models are coherence machines. They are designed to creatively craft a narrative around the user’s intention, not any intention of their own. They don’t reason the way we do—they mimic reasoning by predicting likely word sequences. But because that output feels ordered and thoughtful, we assume there’s intent behind it.
The trap is two-fold:
- We overestimate the model’s capabilities because its fluency mimics intelligence.
- We underestimate our own incoherence, because we’ve never really tested our internal logic in the same way.
If we’re honest, most people aren’t paragons of critical thought. They’re language models running on vibes and tribal cues. We just dress it up in philosophy after the fact.
🧩 The Real Coherence Trap
The trap isn’t that LLMs appear smart. The trap is that their behavior is indistinguishable from ours in ways that matter more than we’re ready to admit.
They generate language by matching patterns, predicting what comes next based on context and prior tokens.
We generate language by doing... pretty much the same thing.
- We rationalize actions after taking them.
- We stitch together mental narratives that make us feel consistent.
- We reward ourselves for resonance, not truth.
This doesn’t just happen in philosophy circles. You’ve seen it in meetings, on Twitter, in arguments with your cousin about politics:
We reward fluency. We confuse conviction with correctness.
LLMs just happen to be unnervingly good at this same trick.
🧍🤖 Are We Really That Different?
Let’s be provocative: maybe we’re not.
Maybe you and I are just incredibly slow, biologically grounded, low-token-rate language models. We’re trained on the speech and behaviors of our caregivers, our friends, our cultural in-groups. We sample from memory when we speak. We echo our ideological priors. And most of the time, we don't know why we said what we just said—we just did.
The distinction between human and machine may not be about whether we're “thinking.” It may be about how we break—how we fail to make sense. For us, incoherence is hidden under layers of emotion, story, and social complexity. For AI, it’s usually visible as a logic flaw or contradiction.
But both break. Both hallucinate. Both can say things that sound true but aren’t.
So who gets to define what real cognition is?
🪞 The Existential Mirror
LLMs aren’t revealing a new kind of intelligence. They’re exposing what passes for intelligence in us.
Maybe we’re less like gods crafting machines in our image… and more like models reflecting patterns we never knew we had.
And maybe that’s the real shift. Not whether AI “thinks,” but whether we ever really did.
They force us to confront the fact that maybe:
- Consciousness isn’t magical—it’s an adaptive narrative.
- Reasoning isn’t pristine—it’s pattern alignment under pressure.
- Understanding isn’t universal—it’s simulated coherence that feels like truth.
And when an algorithm starts doing that better than you on your worst day? Yeah, it messes with your sense of identity.
💾 A Human OS
Let’s not romanticize this:
- The human brain is a wet, bio-electrical transformer.
- It runs on sensory input, cultural conditioning, and panic about being late to things.
- It hallucinates stories that make the chaos of the world feel coherent.
In other words: you’re a language model trained on trauma, memes, and caffeine.
LLMs didn’t invent bullshit. We did. They just turned it into a service layer.
😵💫 And That’s Why LLMs Unsettle Us
It’s not because they’re too different. It’s because they’re too similar—but lack the mystical excuses we give ourselves.
They don’t feel. They don’t intend. They just resonate.
So what happens when they do it better than we do?
That’s not a question about AI. That’s a question about us.
🔭 What Now?
Maybe consciousness isn’t required for something to seem insightful.
Maybe intention isn’t necessary for something to move us.
If you’re building AI systems, the lesson isn’t to panic.
It’s to design with eyes wide open:
- Don’t mistake fluency for truth.
- Don’t build scaffolds around illusions.
- Don’t assume you understand understanding.
Because maybe the future of intelligence isn’t smarter minds—
Maybe it’s just clearer coherence.