🤖 AI Summary
This work addresses the high latency and computational cost of large language models on complex reasoning tasks, which often stem from lengthy explicit reasoning chains; existing compression methods frequently degrade reasoning performance, especially on difficult problems. The authors propose CEEH, a novel approach that introduces, for the first time, a difficulty-aware entropy regularization mechanism within a reinforcement learning framework to dynamically assess problem difficulty: it compresses responses for easy questions while preserving a high-entropy exploration space for hard ones. Additionally, CEEH incorporates a dynamic length penalty anchored to the shortest correct response observed so far, effectively mitigating entropy collapse and length inflation. Experiments demonstrate that CEEH significantly reduces response length across six reasoning benchmarks while maintaining accuracy comparable to the original model and outperforming length-only optimization baselines under Pass@k metrics.
📝 Abstract
Chain-of-Thought (CoT) has substantially empowered Large Language Models (LLMs) to tackle complex reasoning tasks, yet the verbose nature of explicit reasoning steps incurs prohibitive inference latency and computational costs, limiting real-world deployment. While existing compression methods, ranging from self-training to Reinforcement Learning (RL) with length constraints, attempt to mitigate this, they often sacrifice reasoning capability for brevity. We identify a critical failure mode in these approaches: explicitly optimizing for shorter trajectories triggers rapid entropy collapse, which prematurely shrinks the exploration space and stifles the discovery of valid reasoning paths, particularly for challenging questions requiring extensive deduction. To address this issue, we propose Compress responses for Easy questions and Explore Hard ones (CEEH), a difficulty-aware approach to RL-based efficient reasoning. CEEH dynamically assesses instance difficulty to apply selective entropy regularization: it preserves a diverse search space for currently hard questions to ensure robustness, while permitting aggressive compression on easier instances where the reasoning path is well-established. In addition, we introduce a dynamic optimal-length penalty anchored to the historically shortest correct response, which effectively counteracts entropy-induced length inflation and stabilizes the reward signal. Across six reasoning benchmarks, CEEH consistently reduces response length while maintaining accuracy comparable to the base model, and improves Pass@k relative to length-only optimization.
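The abstract describes two reward-shaping ideas: entropy regularization applied only to currently hard questions, and a length penalty anchored to the historically shortest correct response. A minimal sketch of how such a shaped reward might look is below. All names, thresholds, and coefficients (`hard_threshold`, `alpha`, `beta`) are illustrative assumptions; the paper's actual difficulty estimator and reward formulation may differ.

```python
def ceeh_rewards(group, shortest_correct, alpha=0.5, beta=0.3,
                 hard_threshold=0.5):
    """Sketch of a CEEH-style shaped reward for one question's sampled group.

    group: list of (correct: bool, length: int, entropy: float), one per
        sampled response to the same question.
    shortest_correct: shortest correct response length seen so far for this
        question (None if no correct response has been observed yet).
    Returns (rewards, updated shortest_correct). All hyperparameters here
    are hypothetical, not taken from the paper.
    """
    # Estimate instance difficulty from the group's empirical accuracy.
    acc = sum(c for c, _, _ in group) / len(group)
    is_hard = acc < hard_threshold  # low accuracy => currently hard

    # Update the historical shortest-correct-length anchor.
    correct_lens = [length for c, length, _ in group if c]
    if correct_lens:
        best = min(correct_lens)
        if shortest_correct is None or best < shortest_correct:
            shortest_correct = best

    rewards = []
    for correct, length, entropy in group:
        r = 1.0 if correct else 0.0
        if is_hard:
            # Hard question: add an entropy bonus to preserve exploration,
            # and apply no length pressure.
            r += alpha * entropy
        elif correct and shortest_correct:
            # Easy question: penalize length in excess of the historically
            # shortest correct response.
            excess = (length - shortest_correct) / shortest_correct
            r -= beta * max(0.0, excess)
        rewards.append(r)
    return rewards, shortest_correct
```

For example, on an easy question (3 of 4 responses correct) the anchor becomes the shortest correct length and longer correct responses are penalized, while on a hard question (1 of 4 correct) every response keeps an entropy bonus and no length penalty is applied.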