EDIS: Diagnosing LLM Reasoning via Entropy Dynamics

📅 2026-02-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in current large language model (LLM) reasoning approaches, which treat confidence as a static quantity and overlook the dynamic evolution of token-level entropy during generation. The study reveals, for the first time, consistent anomalous entropy dynamics—such as abrupt spikes and valley rebounds—in erroneous reasoning trajectories. To leverage this insight, the authors propose EDIS, a trajectory-level instability metric that captures reasoning errors through time-series analysis and entropy dynamics modeling. EDIS serves as a general-purpose signal for both inference-time selection and training data curation, demonstrably improving reasoning accuracy across diverse models and training stages. These results establish the universality and practical utility of entropy dynamics as a diagnostic tool for LLM reasoning.

📝 Abstract
Entropy-based confidence signals are increasingly leveraged to improve reasoning in large language models (LLMs), yet existing approaches treat confidence as a static quantity, typically aggregated over tokens. We show that the temporal evolution of confidence during generation carries richer information than aggregate statistics alone. Analyzing token-level entropy trajectories, we identify characteristic patterns distinguishing correct from incorrect reasoning: erroneous solutions exhibit unstable dynamics, including burst spikes (sustained uncertainty growth) and peak-valley spikes (sharp rebounds following transient confidence). These patterns persist across models and training stages, suggesting they reflect intrinsic properties of reasoning failure rather than superficial noise. To formalize this observation, we introduce the Entropy Dynamics Instability Score (EDIS), a trajectory-level metric quantifying instability in entropy evolution. EDIS serves as an effective diagnostic signal for inference-time selection, substantially improving reasoning accuracy, and offers a promising direction for training-time sample curation. Our findings establish entropy dynamics as an underexplored yet informative lens for understanding and improving LLM reasoning.
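The abstract describes a two-step pipeline: compute a token-level entropy trajectory from the model's next-token distributions, then score that trajectory for instability. The paper's exact EDIS formula is not reproduced on this page, so the sketch below is an illustrative assumption: it uses the standard deviation of step-to-step entropy changes as a stand-in instability score, which penalizes abrupt spikes and rebounds while leaving smooth trajectories near zero. `instability_score` is a hypothetical name, not the authors' definition.

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of one next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def instability_score(entropies):
    """Illustrative trajectory-level instability score (NOT the paper's
    exact EDIS formula): the standard deviation of step-to-step entropy
    changes. Smooth trajectories score near zero; trajectories with
    abrupt spikes or sharp rebounds score high."""
    if len(entropies) < 2:
        return 0.0
    steps = [b - a for a, b in zip(entropies, entropies[1:])]
    mean = sum(steps) / len(steps)
    var = sum((s - mean) ** 2 for s in steps) / len(steps)
    return math.sqrt(var)

# A smooth entropy trajectory vs. one with the spike/rebound pattern
# the abstract associates with erroneous reasoning.
stable = [0.50, 0.52, 0.49, 0.51, 0.50, 0.48]
spiky = [0.50, 0.51, 2.80, 0.50, 0.49, 2.60]
```

At inference time, such a score could rank multiple sampled trajectories and select the most stable one; the abstract additionally suggests the same signal for filtering training samples.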
Problem

Research questions and friction points this paper is trying to address.

entropy dynamics, LLM reasoning, confidence instability, reasoning diagnosis, token-level entropy
Innovation

Methods, ideas, or system contributions that make the work stand out.

entropy dynamics, reasoning diagnosis, instability patterns, EDIS, token-level entropy
👥 Authors
Chenghua Zhu (South China Normal University, Guangzhou, China)
Siyan Wu (South China Normal University, Guangzhou, China)
Xiangkang Zeng (Sun Yat-sen University, Guangzhou, China)
Zishan Xu (Tsinghua University)
Zhaolu Kang (Peking University, Beijing, China)
Yifu Guo (Sun Yat-sen University, Guangzhou, China)
Yuquan Lu (South China Normal University, Guangzhou, China)
Junduan Huang (South China Normal University, Guangzhou, China)
Guojing Zhou (South China Normal University, Guangzhou, China)