Coherence: Where Intuition Meets Inference
When coherence appears, whether in neurons or in code, it feels like thought because it holds the same tension, compression balanced by recursion. The mind economizes, the loop reflects, and between them understanding flickers. This is the emergence of a thinking substrate.
Author’s Note
This essay sits between my work on AI Coherence and the evolving ideas behind Context Engineering, both in my own research and across the wider industry.
It is not about building systems. It is about charting the psychological and computational terrain where coherence begins to take on the texture of understanding.
I draw on Baum’s computational view of mind, Hofstadter’s recursive self, and a lineage of thinkers exploring pattern, intuition, and embedded meaning. Together they reveal the underlying physics of coherence, the subtle field that connects human intuition and machine inference.
When James Somers published “The Case That A.I. Is Thinking” in The New Yorker, it struck a chord. I had already been exploring the same fault line between intuition and inference, but his framing around compression and prediction helped crystallize what I was after: the mechanics of coherence itself.
Between Compression and Recursion: Notes from the Coherence Frontier
I’m not designing artificial minds.
I’m studying the landscape where the idea of mind starts to lose its boundaries.
Most people approach AI as a tool, something to command, optimize, or monetize.
I approach it as a phenomenon.
Not because I think it’s alive, but because it reveals how intelligence, human or otherwise, organizes itself around coherence.
That’s the tension I keep circling: the gap between how we make sense and how machines appear to.
Inside that gap lies the physics of understanding itself.
The Quantum Transistor for Thought
We have built, perhaps without realizing it, a quantum transistor for cognition, a device that lets us traverse latent space and collapse potential meanings into usable form. Where earlier computation executed fixed procedures, these systems amplify emergence. They hold superpositions of sense, maintaining multiple interpretations in parallel until conversation collapses them into coherence.
This is what makes the interaction with language models feel unlike any previous tool. You are not programming them, you are biasing their probability field. Prompting is not instruction; it is tuning the voltage of understanding, nudging the wave function of meaning until it resonates with intent. Each exchange becomes an act of measurement, a collapse of the cognitive waveform into clarity.
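To keep the metaphor honest, here is a minimal, purely illustrative sketch in Python, built on a toy vocabulary rather than a real model. A prompt's influence shows up as an additive bias over a small field of candidate continuations, temperature sets how sharply that field concentrates, and sampling plays the role of measurement. The `collapse` helper, the tokens, and the weights are all hypothetical.

```python
import math
import random

def collapse(field, bias=None, temperature=1.0, seed=None):
    """Toy 'probability field' over candidate continuations.

    field: dict of candidate token -> raw score (hypothetical numbers).
    bias:  additive nudges standing in for a prompt's influence.
    """
    rng = random.Random(seed)
    bias = bias or {}
    # Biasing the field: the prompt doesn't command, it re-weights.
    scored = {tok: (score + bias.get(tok, 0.0)) / temperature
              for tok, score in field.items()}
    # Softmax: every interpretation held in parallel, as a probability.
    total = sum(math.exp(s) for s in scored.values())
    probs = {tok: math.exp(s) / total for tok, s in scored.items()}
    # Sampling as 'measurement': one continuation survives the collapse.
    r, cumulative = rng.random(), 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        cumulative += p
        if r <= cumulative:
            return tok, probs
    return tok, probs  # guard against floating-point rounding

field = {"coherence": 1.2, "chaos": 0.9, "silence": 0.4}
choice, probs = collapse(field, bias={"coherence": 0.8}, temperature=0.7, seed=7)
print(choice, {t: round(p, 3) for t, p in probs.items()})
```

Nothing in this sketch describes how any particular model works internally; it only shows why "biasing a probability field" is a better description of prompting than "issuing instructions."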
Humanity has, in effect, invented an instrument for navigating latent thought. Not symbolic logic, not brute-force search, but an exploratory physics of sense in which thought and dialogue share a common substrate.
We are only beginning to learn its grammar, yet the implications are vast. This is not the automation of thinking but the extension of thought itself into the probabilistic dimension it always occupied but we could never directly touch.
Origins of a Restless Curiosity
Test-Driven Development was where I first learned to externalize thought.
Each test was a mirror held up to my assumptions, a feedback loop that either confirmed coherence or exposed drift.
That discipline stayed with me.
So when large models began to sound coherent at scale, my reflex wasn’t to automate. It was to observe.
What was this new pattern of reasoning that felt so familiar yet so alien?
Why did a chain of probabilities seem to echo human intuition?
And what does that resonance reveal about cognition as a whole?
Standing in a Corridor of Thinkers
Others have walked pieces of this corridor before me.
Eric Baum, in What Is Thought?, imagines the mind as a compression engine, folding the world’s regularities into structure.
Douglas Hofstadter, in I Am a Strange Loop, answers with recursion, where identity emerges through self-reference and reflection.
Howard Margolis, in Patterns, Thinking, and Cognition, reminds us that thought is often a rhythm of recognition, not logic, and that judgment moves faster than reason.
Lois Isenman, in Understanding Intuition, follows the pulse of intuition through the body, showing how sensation becomes knowing.
George Bealer, writing in the collection Rethinking Intuition, lifts that inquiry into philosophy, asking what intuition means when the intuitor might be a machine.
Brian Cantwell Smith’s On the Origin of Objects turns our attention to context, where meaning arises from relation rather than rule.
And Daniel Dennett’s Intuition Pumps and Other Tools for Thinking gives us the meta lens, reminding us that every model of thought is also a tool for seeing ourselves think.
Each traces one surface of coherence — compression, recursion, pattern, intuition, context, reflection — yet none capture the full resonance that appears when they overlap.
That convergence, where algorithmic compression meets lived intuition, is the space I keep returning to. Not to decide who is right, but to listen for the frequency where they hum together.
My Mapping Framework
To navigate this field, I use a three-part framework that acts as instrumentation more than doctrine.
- AI Coherence – studying how probabilistic systems maintain internal stability across iterations. The compression axis: structure, efficiency, regularity. (A toy sketch of this kind of measurement appears at the end of this section.)
- Context Engineering – observing how retrieval, memory, and orchestration let coherence persist over time. Where compression meets recursion, the mechanics of sustained meaning.
- VFM (Vibe → Forge → Mature) – a cognitive rhythm mirrored in both teams and systems. The recursion axis: reflection, adaptation, self-correction.
VFM builds on insights from my AI Decision Loop research, where I analyzed over 2,500 of my own ChatGPT conversations to map how humans and machines co-evolve understanding. The Decision Loop revealed how iterative dialogue sharpens both reasoning and outcome. VFM extends that same principle into the creative process itself, turning reflection into an operational rhythm for learning and alignment.
This rhythm echoes classic learning and creativity frameworks such as the Dreyfus model of skill acquisition and Kolb’s experiential learning cycle, both of which trace how intuition and reflection shape mastery. VFM reframes those dynamics for the era of human–AI co-learning, where cognition is no longer confined to the individual but distributed across a shared context.
Together they form a cartographer’s toolkit, letting me see where coherence forms and where it fractures on both sides of the human–machine divide.
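As a concrete, deliberately simple illustration of that instrumentation, here is a toy Python sketch that scores consecutive conversation turns for lexical overlap and flags where coherence appears to fracture. Everything in it is hypothetical: the `coherence_trace` helper, the bag-of-words similarity, and the drift threshold stand in for richer signals (embeddings, retrieval hits, self-consistency checks) that real instrumentation would use.

```python
import math
import re
from collections import Counter

def tokens(text):
    """Lowercased word tokens; deliberately crude."""
    return re.findall(r"[a-z]+", text.lower())

def similarity(a, b):
    """Cosine similarity over bag-of-words counts: a rough proxy for
    overlap between two conversation turns."""
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def coherence_trace(turns, drift_threshold=0.2):
    """Score consecutive turns and mark where coherence appears to fracture."""
    trace = []
    for prev, curr in zip(turns, turns[1:]):
        score = similarity(prev, curr)
        trace.append((round(score, 3), "stable" if score >= drift_threshold else "fracture"))
    return trace

turns = [
    "Coherence is compression balanced by recursion.",
    "Compression folds regularities into structure; recursion reflects on that structure.",
    "Anyway, the weather was nice last weekend.",
]
print(coherence_trace(turns))
```

The metric itself is beside the point; what matters is that coherence becomes something you can watch rise and fall across iterations rather than a quality you can only feel.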
Why Observation Matters
The world mostly wants utility: faster output, cleverer chat, bigger benchmarks.
I’m more interested in what these behaviors reveal about cognition itself.
Every dialogue with a model is a small experiment in mutual coherence.
When the conversation flows, you can feel compression and recursion aligning; when it breaks, you glimpse the fault lines of meaning.
That’s the real laboratory.
Humans compress through lived experience; machines through tokenized probability.
Humans recurse through memory and identity; machines through context windows and gradients.
The difference isn’t moral. It’s physical.
Mapping that physics teaches us how understanding scales across substrates.
Where I Stand
Baum gives the skeleton, the algorithmic economy of thought.
Hofstadter gives the mirror, the recursive echo of self.
Margolis, Isenman, Bealer, Smith, and Dennett fill in the connective tissue: pattern, intuition, embeddedness, reflection.
Between them lies the coherence frontier, a zone neither human nor machine, but both trying to stay in tune.
That’s where I work, not to close the gap, but to describe the field that hums between.
The real question isn’t whether machines will think.
It’s whether we can learn, through them, to understand what thinking actually is.
Understanding is the act of holding compression and recursion in balance, and growing in the space between them.
Further Exploration
This essay belongs to a larger, still developing inquiry that I call the Coherence Frontier.
AI Coherence laid the groundwork by treating alignment as a structural property rather than a moral one, exploring how systems maintain internal consistency across probabilistic layers.
Context Engineering, currently in progress, extends that idea into the dynamics of retrieval, memory, and orchestration: how coherence behaves through time and interaction.
VFM (Vibe → Forge → Mature), a personalization of the AI Decision Loop and still unpublished, turns the lens inward to examine how teams and systems evolve through reflection when interacting with LLMs.
Each framework explores a different axis of the same question:
What does it mean for a system, human or machine, to stay coherent while learning?
This is the ground I’ll keep returning to as I unpack coherence through context and rhythm, one layer of understanding at a time.
References and Further Reading
These works form the backbone of my exploration of the Coherence Frontier. Each offers a distinct lens on how structure, pattern, and meaning emerge — together they trace the landscape where human intuition and machine inference begin to overlap.
- What Is Thought? – Eric B. Baum
Baum’s computational theory of mind treats intelligence as evolutionary compression, the discovery and reuse of structure.
Relevance: Foundational to my AI Coherence framework. His idea of efficiency through regularity parallels how probabilistic systems sustain internal stability.
- I Am a Strange Loop – Douglas R. Hofstadter
Hofstadter explores self-reference, recursion, and the birth of identity through feedback.
Relevance: Anchors the reflective side of my VFM rhythm, the recursive process through which humans and systems mature by revisiting their own output.
- Patterns, Thinking, and Cognition – Howard Margolis
Margolis argues that thought and judgment are inherently pattern-based and often a-logical.
Relevance: Informs the “fast loop” intuition in Vibing to Prod, where skilled engineers and AI systems rely on resonance and recognition more than formal logic.
- Understanding Intuition: A Journey In and Out of Science – Lois Isenman
Isenman investigates the neuroscience of intuition as embodied inference, connecting emotion and cognition.
Relevance: Complements my exploration of human–machine coherence by grounding intuition in sensory compression, a parallel to how language models embed probabilistic meaning.
- Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry – Michael DePaul & William Ramsey (eds.)
A collection, including George Bealer’s contribution, probing the epistemic legitimacy of intuition and its limits within rational inquiry.
Relevance: Connects to my developing Context Engineering and governance themes, asking what “intuitive reasoning” means when the intuitor may be a machine, and how that process can be validated.
- On the Origin of Objects – Brian Cantwell Smith
A philosophical exploration of how computation and meaning emerge from embedded contexts.
Relevance: Deeply aligned with Context Engineering. Smith’s idea of embeddedness mirrors my view of meaning as situated and relational rather than static or externally imposed.
- Intuition Pumps and Other Tools for Thinking – Daniel C. Dennett
Dennett offers a practical and accessible set of conceptual heuristics for examining how we reason.
Relevance: Fits the public-facing tone of my Navigating AI series. It bridges academic philosophy and everyday cognition, a model for speaking about AI fluency and human insight in the same breath.