🤖 AI Summary
Reinforcement learning (RL) for reasoning-oriented large language models (LLMs) often overlooks the value of low-entropy segments—semantically stable, deterministic reasoning steps—despite their critical role in correctness. Method: This paper proposes LESS, a correctness-aware fine-grained advantage shaping framework. It builds on the empirical observation that the overlap of low-entropy segments across correct responses strongly correlates with final answer accuracy. Leveraging this insight, LESS introduces a segment-level advantage modulation mechanism: amplifying advantages for low-entropy segments in correct trajectories, suppressing them in incorrect ones, and neutralizing them in ambiguous cases—thereby enabling precise control while preserving high-entropy exploratory capacity. Contribution/Results: Integrated into the GRPO framework with verifiable rewards (RLVR), LESS outperforms strong baselines across three backbone models and six mathematical reasoning benchmarks, yielding substantial average accuracy gains and a more robust performance floor.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has become a central approach for improving the reasoning ability of large language models. Recent work studies RLVR through token entropy, arguing that high-entropy tokens drive exploration and should receive stronger updates. However, these analyses overlook the fact that most of a reasoning trajectory consists of low-entropy segments that encode stable and reusable structural patterns. Through qualitative and quantitative analyses, we find that the overlap of low-entropy segments across correct responses strongly correlates with model accuracy, while overlaps involving incorrect responses exhibit stable but unproductive patterns. Motivated by these findings, we propose LESS, a correctness-aware reinforcement framework that performs fine-grained advantage modulation over low-entropy segments. LESS amplifies segments unique to correct responses, suppresses those unique to incorrect ones, and neutralizes segments shared by both, while preserving high-entropy exploration in the underlying RL algorithm. Instantiated on top of the popular GRPO, LESS consistently improves accuracy over strong RL baselines across three backbones and six math benchmarks, and achieves a more robust performance floor.
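The modulation rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the fixed entropy threshold, and the `boost`/`suppress` scale factors are all hypothetical stand-ins for whatever segmentation and scaling LESS actually uses.

```python
import numpy as np

def modulate_advantages(advantages, entropies, is_correct,
                        entropy_threshold=0.5, boost=1.5, suppress=0.5,
                        shared_mask=None):
    """Illustrative LESS-style advantage shaping (hypothetical parameters).

    - Low-entropy tokens unique to a correct response: advantage amplified.
    - Low-entropy tokens unique to an incorrect response: advantage suppressed.
    - Low-entropy tokens shared by correct and incorrect responses
      (marked in shared_mask): advantage neutralized (set to zero).
    - High-entropy tokens: left untouched, preserving exploration.
    """
    adv = np.asarray(advantages, dtype=float).copy()
    low = np.asarray(entropies, dtype=float) < entropy_threshold
    if shared_mask is None:
        shared_mask = np.zeros_like(low, dtype=bool)
    else:
        shared_mask = np.asarray(shared_mask, dtype=bool)

    unique_low = low & ~shared_mask
    adv[unique_low] *= boost if is_correct else suppress
    adv[low & shared_mask] = 0.0  # neutralize ambiguous segments
    return adv

# Example: three tokens, the first and third low-entropy, the third shared
# between correct and incorrect responses in the group.
shaped = modulate_advantages([1.0, 1.0, 1.0], [0.1, 0.9, 0.2],
                             is_correct=True,
                             shared_mask=[False, False, True])
# → [1.5, 1.0, 0.0]
```

In a GRPO-style setup, a rule like this would be applied after computing the group-normalized per-token advantages and before the policy-gradient update, with correctness supplied by the verifiable reward.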