LLMs Featured Are We Just Language Models in Meatspace? Maybe you’re not so different from an LLM. You just hallucinate with more confidence. This piece unpacks the “coherence trap”: why AI feels smarter than it is, and why that might reveal something uncomfortable about us.
Coherence Featured The Coherence Trap: Why LLMs Feel Smart (But Aren't Thinking) LLMs don’t think; they resonate. In this article, I unpack the Coherence Trap: the illusion of intelligence created by structured, fluent language. We explore how meaning emerges from alignment, not understanding, and why designing for coherence, not cognition, is the key to building with AI.
Coherence Featured Coherence Confirmed: LLMs Aren't Thinking, They're Coherent LLMs aren’t reasoning; they’re aligning. A new ICLR paper backs my theory: coherence, not cognition, drives meaning. These models flow through context, not thought. That shift changes how we prompt, evaluate, and design with AI.