🤖 AI Summary
This work addresses the tendency of large language models to generate verbose and repetitive intermediate steps during complex reasoning, a problem that existing approaches fail to solve because they optimize only for final response length and are thus prone to reward hacking. The study introduces information density as a core metric for reasoning quality, using conditional entropy analysis to show that high-quality reasoning exhibits low-uncertainty convergence and monotonic progress. Building on these insights, the authors design a reinforcement learning reward mechanism that integrates an area-under-the-curve (AUC) term, a monotonicity constraint, and a length-scaling term. Evaluated on mathematical reasoning benchmarks, the proposed method achieves state-of-the-art or comparable accuracy while substantially reducing token consumption, striking an effective balance between accuracy and computational efficiency.
📝 Abstract
Large Language Models (LLMs) with extended reasoning capabilities often generate verbose and redundant reasoning traces, incurring unnecessary computational cost. While existing reinforcement learning approaches address this by optimizing final response length, they neglect the quality of intermediate reasoning steps, leaving models vulnerable to reward hacking. We argue that verbosity is not merely a length problem, but a symptom of poor intermediate reasoning quality. To investigate this, we conduct an empirical study tracking the conditional entropy of the answer distribution across reasoning steps. We find that high-quality reasoning traces exhibit two consistent properties: low-uncertainty convergence and monotonic progress. These findings suggest that high-quality reasoning traces are informationally dense: each step contributes meaningful entropy reduction relative to the total reasoning length. Motivated by this, we propose InfoDensity, a reward framework for RL training that combines an area-under-the-curve (AUC) reward and a monotonicity reward as a unified measure of reasoning quality, weighted by a length-scaling term that favors achieving equivalent quality more concisely. Experiments on mathematical reasoning benchmarks demonstrate that InfoDensity matches or surpasses state-of-the-art baselines in accuracy while significantly reducing token usage, achieving a strong accuracy-efficiency trade-off.
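To make the reward structure concrete, here is a minimal, hypothetical sketch of how the three ingredients described above (an AUC-style entropy-reduction term, a monotonicity term, and a length-scaling factor) could be combined into a single scalar reward. This is not the paper's actual implementation: the function name `info_density_reward`, the weights `alpha` and `beta`, and the specific normalizations are all illustrative assumptions; the input is assumed to be a precomputed trajectory of conditional entropies of the answer distribution after each reasoning step.

```python
import numpy as np

def info_density_reward(entropies, alpha=1.0, beta=1.0):
    """Illustrative InfoDensity-style reward (not the paper's exact formula).

    entropies: conditional entropy of the answer distribution after each
    reasoning step, e.g. entropies[t] = H(answer | steps 0..t).
    """
    h = np.asarray(entropies, dtype=float)
    T = len(h)
    eps = 1e-8

    # AUC-style term: average entropy reduction relative to the initial
    # uncertainty, so fast and deep convergence scores close to 1.
    drop = h[0] - h
    auc = drop.mean() / max(h[0], eps)

    # Monotonicity term: penalize total entropy *increases* between steps,
    # rewarding traces whose uncertainty only goes down.
    increases = np.clip(np.diff(h), 0.0, None).sum()
    mono = 1.0 / (1.0 + increases)

    # Length scaling: favor achieving equivalent quality in fewer steps.
    length_scale = 1.0 / np.sqrt(T)

    return (alpha * auc + beta * mono) * length_scale
```

Under this sketch, a trace that converges quickly and monotonically to low entropy outscores both a noisy trace (penalized by the monotonicity term) and a padded trace that reaches the same final entropy in more steps (penalized by the length scaling), which mirrors the accuracy-efficiency trade-off the abstract describes.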