ENTRA: Entropy-Based Redundancy Avoidance in Large Language Model Reasoning

📅 2026-01-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the tendency of large reasoning models to generate verbose and repetitive reasoning chains on simple tasks, which incurs high computational costs with limited performance gains. To mitigate this inefficiency, the authors propose ENTRA, a novel framework that introduces Bidirectional Importance Estimation (BIE) to quantify token-level importance. Leveraging the normalized entropy of low-importance tokens, ENTRA constructs a redundancy-aware reward signal and employs entropy-driven reinforcement learning to achieve fine-grained and generalizable suppression of redundant content. Evaluated across multiple mathematical reasoning benchmarks, ENTRA reduces output length by 37%–53% while maintaining or even improving reasoning accuracy.
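The Bidirectional Importance Estimation (BIE) idea above, scoring each token by both prediction confidence and forward influence, can be sketched roughly as follows. This is a minimal illustrative reading, not the paper's specification: the `alpha` mixing weight and the use of attention mass from later tokens as "forward influence" are assumptions.

```python
import math

def bidirectional_importance(logprobs, future_attn, alpha=0.5):
    """Hedged sketch of BIE-style token importance.

    logprobs    -- model log-probability of each generated token (confidence).
    future_attn -- assumed proxy for forward influence, e.g. total attention
                   mass that later tokens place on this token.
    alpha       -- assumed mixing weight between the two signals.
    """
    conf = [math.exp(lp) for lp in logprobs]  # confidence = p(token | prefix)
    return [alpha * c + (1 - alpha) * f for c, f in zip(conf, future_attn)]
```

Tokens scoring low on both signals would then be candidates for the "low-importance" set that the redundancy reward operates on.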

📝 Abstract
Large Reasoning Models (LRMs) often suffer from overthinking, generating unnecessarily long reasoning chains even for simple tasks. This leads to substantial computational overhead with limited performance gain, primarily due to redundant verification and repetitive generation. While prior work typically constrains output length or optimizes correctness, such coarse supervision fails to guide models toward concise yet accurate inference. In this paper, we propose ENTRA, an entropy-based training framework that suppresses redundant reasoning while preserving performance. ENTRA first estimates the token-level importance using a lightweight Bidirectional Importance Estimation (BIE) method, which accounts for both prediction confidence and forward influence. It then computes a redundancy reward based on the entropy of low-importance tokens, normalized by its theoretical upper bound, and optimizes this reward via reinforcement learning. Experiments on mathematical reasoning benchmarks demonstrate that ENTRA reduces output length by 37% to 53% with no loss (and in some cases gains) in accuracy. Our approach offers a principled and efficient solution to reduce overthinking in LRMs, and provides a generalizable path toward redundancy-aware reasoning optimization.
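The normalized-entropy quantity described in the abstract, the entropy of low-importance tokens divided by its theoretical upper bound, might look like the following minimal sketch. The importance threshold `tau` and the use of per-token predictive distributions are assumptions; how the resulting score is signed and combined with a correctness term inside the RL objective is left to the paper.

```python
import math

def redundancy_score(token_dists, importance, tau=0.5):
    """Hedged sketch: mean entropy of the predictive distributions of
    low-importance tokens, normalized by the upper bound log|V| (the
    entropy of a uniform |V|-way distribution). Returns a value in [0, 1].

    token_dists -- per-token predictive probability distributions.
    importance  -- per-token importance scores (e.g. from a BIE-style estimate).
    tau         -- assumed threshold below which a token counts as low-importance.
    """
    low = [d for d, s in zip(token_dists, importance) if s < tau]
    if not low:
        return 0.0
    h_max = math.log(len(low[0]))  # theoretical maximum entropy for |V| outcomes
    ent = [-sum(p * math.log(p) for p in d if p > 0) for d in low]
    return sum(e / h_max for e in ent) / len(ent)
```

A redundancy-aware reward could then penalize this score, e.g. `1 - redundancy_score(...)`, alongside a task-correctness reward during reinforcement learning.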
Problem

Research questions and friction points this paper is trying to address.

overthinking
redundancy
large reasoning models
reasoning efficiency
computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

entropy-based reasoning
redundancy avoidance
token-level importance
reinforcement learning
Large Reasoning Models