🤖 AI Summary
This work addresses a critical limitation in current large language model (LLM) reasoning approaches: confidence is treated as a static quantity, overlooking the dynamic evolution of token-level entropy during generation. The study reveals, for the first time, consistent anomalous entropy dynamics, such as burst spikes and peak-valley rebounds, in erroneous reasoning trajectories. Building on this insight, the authors propose EDIS, a trajectory-level instability metric that captures reasoning errors through time-series analysis of entropy dynamics. EDIS serves as a general-purpose signal for both inference-time selection and training-data curation, demonstrably improving reasoning accuracy across diverse models and training stages. These results establish the universality and practical utility of entropy dynamics as a diagnostic tool for LLM reasoning.
📝 Abstract
Entropy-based confidence signals are increasingly leveraged to improve reasoning in large language models (LLMs), yet existing approaches treat confidence as a static quantity -- typically aggregated over tokens. We show that the \emph{temporal evolution} of confidence during generation carries richer information than aggregate statistics alone. Analyzing token-level entropy trajectories, we identify characteristic patterns distinguishing correct from incorrect reasoning: erroneous solutions exhibit unstable dynamics, including burst spikes (sustained uncertainty growth) and peak-valley spikes (sharp rebounds following transient confidence). These patterns persist across models and training stages, suggesting they reflect intrinsic properties of reasoning failure rather than superficial noise. To formalize this observation, we introduce the Entropy Dynamics Instability Score (\textbf{EDIS}), a trajectory-level metric quantifying instability in entropy evolution. EDIS serves as an effective diagnostic signal for inference-time selection, substantially improving reasoning accuracy, and offers a promising direction for training-time sample curation. Our findings establish entropy dynamics as an underexplored yet informative lens for understanding and improving LLM reasoning.
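To make the idea concrete, here is a minimal, hypothetical sketch of the pipeline the abstract describes: compute a token-level entropy trajectory from per-step logits, then score its instability. The abstract does not specify the EDIS formula, so `edis_sketch` below is an illustrative stand-in (its name, the drift term, and the rebound detector are assumptions, not the authors' method); it proxies "burst spikes" with average upward entropy drift and "peak-valley spikes" with large sign-flipping consecutive differences.

```python
import numpy as np

def token_entropies(logits: np.ndarray) -> np.ndarray:
    """Shannon entropy (in nats) of each step's next-token distribution.

    `logits` has shape (num_steps, vocab_size).
    """
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(shifted)
    probs /= probs.sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def edis_sketch(entropy: np.ndarray, spike_z: float = 1.0) -> float:
    """Hypothetical trajectory-level instability score (NOT the paper's EDIS).

    Combines two cues named in the abstract:
      * burst spikes       -- sustained uncertainty growth, proxied here by
                              the average upward drift of the trajectory;
      * peak-valley spikes -- sharp rebounds after transient confidence,
                              proxied by a large drop immediately followed
                              by a large rise in consecutive differences.
    """
    d = np.diff(entropy)
    if d.size == 0:
        return 0.0
    sigma = d.std() + 1e-12
    burst = np.clip(d, 0.0, None).sum() / len(d)  # mean upward drift
    rebounds = np.sum((d[:-1] < -spike_z * sigma) & (d[1:] > spike_z * sigma))
    return float(burst + rebounds / len(d))
```

In this sketch, a flat entropy trajectory scores 0, while an oscillating one (repeated valley-then-spike rebounds) scores strictly higher, which is the ranking one would use for inference-time selection among sampled solutions (lower score preferred).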