Compress the Easy, Explore the Hard: Difficulty-Aware Entropy Regularization for Efficient LLM Reasoning

📅 2026-02-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high inference latency and computational cost that lengthy explicit reasoning chains impose on large language models, together with the tendency of existing compression methods to degrade reasoning performance, especially on difficult problems. The authors propose CEEH, which introduces, for the first time, a difficulty-aware entropy regularization mechanism within a reinforcement learning framework that dynamically distinguishes problem difficulty: it compresses responses for easy questions while preserving a high-entropy exploration space for hard ones. CEEH additionally incorporates a dynamic length penalty anchored to the shortest correct response observed in training history, which effectively mitigates entropy collapse and length inflation. Experiments show that CEEH significantly reduces response length across six reasoning benchmarks while maintaining accuracy comparable to the original model and outperforming length-only optimization baselines under Pass@k metrics.

📝 Abstract
Chain-of-Thought (CoT) has substantially empowered Large Language Models (LLMs) to tackle complex reasoning tasks, yet the verbose nature of explicit reasoning steps incurs prohibitive inference latency and computational costs, limiting real-world deployment. While existing compression methods, ranging from self-training to Reinforcement Learning (RL) with length constraints, attempt to mitigate this, they often sacrifice reasoning capability for brevity. We identify a critical failure mode in these approaches: explicitly optimizing for shorter trajectories triggers rapid entropy collapse, which prematurely shrinks the exploration space and stifles the discovery of valid reasoning paths, particularly for challenging questions requiring extensive deduction. To address this issue, we propose Compress responses for Easy questions and Explore Hard ones (CEEH), a difficulty-aware approach to RL-based efficient reasoning. CEEH dynamically assesses instance difficulty to apply selective entropy regularization: it preserves a diverse search space for currently hard questions to ensure robustness, while permitting aggressive compression on easier instances where the reasoning path is well-established. In addition, we introduce a dynamic optimal-length penalty anchored to the historically shortest correct response, which effectively counteracts entropy-induced length inflation and stabilizes the reward signal. Across six reasoning benchmarks, CEEH consistently reduces response length while maintaining accuracy comparable to the base model, and improves Pass@k relative to length-only optimization.
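The reward design described in the abstract can be sketched as follows, purely as an illustration: estimate per-question difficulty from the group pass rate (as in GRPO-style group sampling), add an entropy bonus on hard questions to keep exploration alive, and on easy questions apply a length penalty anchored to the historically shortest correct response. All names, thresholds, and coefficients here (`ceeh_reward`, `hard_thresh`, `ent_coef`, `len_coef`) are assumptions for exposition, not the paper's actual formulation.

```python
import math

def ceeh_reward(
    correct: list[bool],       # per-rollout correctness for one question
    lengths: list[int],        # per-rollout token counts
    entropies: list[float],    # per-rollout mean token entropy
    best_len: dict,            # qid -> shortest correct length seen so far
    qid: str,
    hard_thresh: float = 0.5,  # pass rate below this => "hard" question
    ent_coef: float = 0.01,    # entropy-bonus weight (hypothetical)
    len_coef: float = 0.1,     # length-penalty weight (hypothetical)
) -> list[float]:
    """Illustrative difficulty-aware reward shaping, NOT the paper's formula.

    Hard questions (low pass rate) receive an entropy bonus so the policy
    keeps exploring; easy questions receive a length penalty anchored to
    the shortest correct response observed in history, so compression
    targets an attainable optimum rather than an ever-shrinking one.
    """
    pass_rate = sum(correct) / len(correct)
    is_hard = pass_rate < hard_thresh

    # Update the historical shortest-correct-length anchor for this question.
    correct_lens = [L for L, c in zip(lengths, correct) if c]
    if correct_lens:
        best_len[qid] = min(best_len.get(qid, math.inf), min(correct_lens))

    rewards = []
    for c, L, H in zip(correct, lengths, entropies):
        r = 1.0 if c else 0.0
        if is_hard:
            r += ent_coef * H  # preserve exploration space on hard items
        elif c and qid in best_len:
            excess = max(0, L - best_len[qid])  # tokens beyond best known length
            r -= len_coef * excess / max(best_len[qid], 1)
        rewards.append(r)
    return rewards
```

Under this sketch, an easy question's shortest correct rollout keeps the full reward while longer correct rollouts are discounted in proportion to their excess length, and a hard question's rollouts are rewarded for staying high-entropy regardless of length.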
Problem

Research questions and friction points this paper is trying to address.

Chain-of-Thought
reasoning efficiency
entropy collapse
response compression
difficulty-aware
Innovation

Methods, ideas, or system contributions that make the work stand out.

difficulty-aware
entropy regularization
efficient reasoning
Chain-of-Thought compression
reinforcement learning
Qin-Wen Luo
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics
Sheng Ren
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics
Xiang Chen
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence
Rui Liu
Didi International Business Group
Jun Fang
Didi International Business Group
Naiqiang Tan
Didi International Business Group
Sheng-Jun Huang
Nanjing University of Aeronautics and Astronautics
Machine Learning