🤖 AI Summary
This work addresses the poorly understood “aha” mechanism in large language model (LLM) reasoning, in particular the role that expressing uncertainty plays in effective inference. It proposes an information-theoretic framework that decomposes reasoning into procedural information processing and epistemic verbalization, formally characterizing the latter as a key mechanism for achieving informational sufficiency: the explicit externalization of uncertainty that drives continued information acquisition and downstream control. Integrating information-theoretic analysis, programmatic modeling, and quantification of verbalization, the authors show empirically that strong reasoning capability stems not from specific surface tokens but from the external articulation of uncertainty. Epistemic verbalization overcomes informational stagnation, offering a unified account of insight-like “aha” phenomena and post-training behaviors, and pointing toward a new paradigm for designing reasoning-capable models.
📝 Abstract
LLMs often exhibit Aha moments during reasoning, such as apparent self-correction following tokens like "Wait," yet their underlying mechanisms remain unclear. We introduce an information-theoretic framework that decomposes reasoning into procedural information and epistemic verbalization, the explicit externalization of uncertainty that supports downstream control actions. We show that purely procedural reasoning can become informationally stagnant, whereas epistemic verbalization enables continued information acquisition and is critical for achieving information sufficiency. Empirical results demonstrate that strong reasoning performance is driven by uncertainty externalization rather than by specific surface tokens. Our framework unifies prior findings on Aha moments and post-training experiments, and offers insights for future reasoning model design.
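As a reading aid, here is one plausible way to formalize the decomposition the abstract describes. The notation (answer $A$, trace $T_{1:t}$, procedural content $P_t$, epistemic verbalization $E_t$, tolerance $\epsilon$) is ours for illustration and not necessarily the paper's; treat it as a sketch of the claims, not the authors' exact definitions.

```latex
% Assumed notation (ours, not necessarily the paper's):
% A       : the target answer
% T_{1:t} : the reasoning trace up to step t, with each step
%           T_s = (P_s, E_s) split into procedural content P_s
%           and epistemic verbalization E_s.

% Information sufficiency: the trace pins down the answer
% up to a small residual uncertainty epsilon.
H(A \mid T_{1:t}) \le \epsilon

% Informational stagnation of purely procedural reasoning:
% a further procedural step adds (almost) no information about A.
I(A;\, P_{t+1} \mid T_{1:t}) \approx 0

% Epistemic verbalization breaks the stagnation by contributing
% strictly positive conditional information about A.
I(A;\, E_{t+1} \mid T_{1:t}) > 0
```

On this reading, an "aha" token such as "Wait" matters only insofar as it carries the epistemic component $E_t$; the surface token itself is incidental, which matches the abstract's claim that performance is driven by uncertainty externalization rather than by specific tokens.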