Coherence Confirmed: LLMs Aren't Thinking, They're Coherent

LLMs aren’t reasoning—they’re aligning. A new ICLR paper backs my theory: coherence, not cognition, drives meaning. These models flow through context, not thought. That shift changes how we prompt, evaluate, and design with AI.

A visualization of coherence in motion—language as flowing structure, not discrete thought.

In my white paper AI Coherence: Meaning from Patterns, Not Memory, I argued that LLMs don’t think in the traditional sense. They don’t reason symbolically. They don’t even operate over discrete chunks of meaning the way humans intuitively process language. Instead, they construct meaning through coherence — the internal alignment of context over time. They are coherence engines, not cognitive agents.

A newly released ICLR 2025 paper, Language Models Are Implicitly Continuous, now gives empirical and mathematical weight to this idea.

The Core Finding: Language Is Treated as Continuous

Despite being trained on discrete tokens, LLMs behave as though they are interpreting language as continuous in time and space. The authors introduce a continuous extension of the Transformer architecture that doesn't change model weights but allows us to view LLM behavior as a flow over continuous functions.

Through clever experiments, they demonstrate that:

  • LLMs respond differently when the duration of a token is varied.
  • Interpolations between token embeddings (e.g., halfway between “apple” and “banana”) produce meaningful, plausible outputs.
  • Pretrained models treat intermediate, never-before-seen vectors as semantic entities.

This behavior suggests that LLMs operate over a smooth latent space where meaning is shaped by alignment and flow rather than symbolic inference.
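To make the interpolation experiment concrete, here is a minimal sketch of that kind of probe, assuming the Hugging Face transformers library and the small gpt2 checkpoint (the paper's own models and protocol differ in detail): blend the input embeddings of two words and ask the model what should follow the blended token.

```python
# Minimal sketch: probing an interpolated token embedding with a pretrained LM.
# Assumes the Hugging Face `transformers` library and the `gpt2` checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def embed(word: str) -> torch.Tensor:
    """Return the input embedding of a word (first sub-token if it splits)."""
    token_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    return model.get_input_embeddings().weight[token_ids[0]]

# Build a prompt whose final token is a linear blend of " apple" and " banana".
prompt = "I ate a fresh"
prompt_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
prompt_embs = model.get_input_embeddings()(prompt_ids)  # (1, seq_len, hidden_dim)

alpha = 0.5  # 0.0 = pure " apple", 1.0 = pure " banana"
blend = (1 - alpha) * embed(" apple") + alpha * embed(" banana")
inputs_embeds = torch.cat([prompt_embs, blend.view(1, 1, -1)], dim=1)

with torch.no_grad():
    logits = model(inputs_embeds=inputs_embeds).logits

# Inspect what the model predicts should follow the blended, never-seen token.
top = logits[0, -1].softmax(dim=-1).topk(5)
print(tokenizer.convert_ids_to_tokens(top.indices.tolist()), top.values.tolist())
```

The model has never been trained on this intermediate vector, yet it continues the sentence plausibly, which is exactly the smooth-latent-space behavior described above.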

Coherence Theory Meets Continuity

This paper validates a core tenet of my Coherence Theory:

LLMs don't know things. They don't calculate. They align. They produce outputs that fit — not because they understand, but because those outputs preserve contextual coherence.

What the paper shows is that this coherence isn't just metaphorical. It is measurably continuous. Token durations modulate model behavior in ways humans can't intuit. Embedding space isn't just a lookup table; it's a terrain of fluid meaning. And most importantly, this fluidity is what makes LLMs so powerful, and so alien.

Implications for AI System Design

If LLMs operate through continuous coherence, not discrete reasoning, then most current design assumptions need to be updated. Here's how:

1. Prompt Engineering

Prompts aren't instructions. They're boundary conditions on a coherence field. You're shaping the flow of meaning, not issuing commands.

2. Agent Design

Agents don’t pursue goals the way a deliberating mind does. They simulate goal-directed behavior only while the goal stays in context. Agency has to be scaffolded, not assumed.
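In practice, "scaffolded" often just means the goal is re-injected into the context at every step instead of trusted to persist on its own. Here is a minimal, hypothetical sketch of that loop; call_llm and the stopping check are placeholders, not a real API.

```python
# Hypothetical agent loop that scaffolds the goal instead of assuming it.
# `call_llm` is a placeholder for whatever completion API you actually use.
from typing import Callable, List

def run_scaffolded_agent(
    goal: str,
    call_llm: Callable[[str], str],
    max_steps: int = 5,
) -> List[str]:
    """Re-inject the goal and the running history into every prompt,
    so coherence keeps pulling the model toward the goal."""
    history: List[str] = []
    for _ in range(max_steps):
        prompt = (
            f"Goal: {goal}\n"
            "Progress so far:\n"
            + "\n".join(f"- {step}" for step in history)
            + "\nNext step toward the goal:"
        )
        action = call_llm(prompt)
        history.append(action)
        if "DONE" in action:  # placeholder stopping criterion
            break
    return history
```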

3. Evaluation

Traditional metrics (accuracy, truthfulness) miss the point. You have to evaluate whether the output fits the intent, the tone, the prior flow. Think: narrative alignment, not just correctness.
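One hedged way to operationalize "fit" is to score an output by its embedding similarity to the prior context and to the stated intent, rather than by exact-match accuracy. The sketch below assumes the sentence-transformers package and the all-MiniLM-L6-v2 encoder; both are illustrative choices, not part of the paper.

```python
# Sketch: score "narrative alignment" as embedding similarity, not exact match.
# Assumes the `sentence-transformers` package; the encoder choice is illustrative.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def coherence_score(prior_context: str, intent: str, output: str) -> float:
    """Average cosine similarity of the output to the prior flow and the intent."""
    ctx_emb, intent_emb, out_emb = encoder.encode(
        [prior_context, intent, output], convert_to_tensor=True
    )
    fit_to_context = util.cos_sim(out_emb, ctx_emb).item()
    fit_to_intent = util.cos_sim(out_emb, intent_emb).item()
    return 0.5 * (fit_to_context + fit_to_intent)

print(coherence_score(
    prior_context="A calm, technical walkthrough of the deployment steps.",
    intent="Explain the rollback procedure in the same tone.",
    output="To roll back, redeploy the previous image tag and re-run the health checks.",
))
```

A score like this complements accuracy; it doesn't replace it, but it captures the "does this fit the flow" question that accuracy alone misses.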

4. Interpretability

We shouldn’t just trace neuron activations. We should ask: what shape is this model holding? What trajectory is coherence pulling it toward?
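As a rough illustration of "trajectory", you can watch how the hidden state of the final token moves from layer to layer, for example via cosine similarity between successive layers. This is a probe idea under my own assumptions (Hugging Face transformers, the gpt2 checkpoint), not the paper's method.

```python
# Sketch: trace the layer-by-layer trajectory of the last token's hidden state.
# Assumes Hugging Face `transformers` and the `gpt2` checkpoint.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

text = "The experiment suggests language models treat meaning as continuous."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # (n_layers + 1) tensors of (1, seq, dim)

# Cosine similarity between consecutive layers for the final token:
# high values suggest the representation is drifting smoothly, low values a sharp turn.
last_token = [h[0, -1] for h in hidden_states]
for layer, (prev, curr) in enumerate(zip(last_token, last_token[1:]), start=1):
    sim = F.cosine_similarity(prev, curr, dim=0).item()
    print(f"layer {layer:2d}: cosine similarity to previous layer = {sim:.3f}")
```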

This Isn’t Just Theory Anymore

This new research paper gives us the math. It gives us experiments. But more importantly, it gives us language to talk about what so many of us have felt when working with these models:

They don’t think. They don’t plan. They flow.

And that flow is coherence over time, not cognition.

If you want to understand what LLMs really are and how to work with them effectively, you have to reframe your mental model. Start with coherence.

For a deeper exploration of this paradigm shift, check out my white paper:
📄 AI Coherence: Meaning from Patterns, Not Memory