Rediscovering Entropy Regularization: Adaptive Coefficient Unlocks Its Potential for LLM Reinforcement Learning

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) trained with reinforcement learning with verifiable rewards (RLVR) commonly suffer from premature policy entropy collapse: the policy degenerates toward determinism, impairing exploration and reasoning. Method: We propose Adaptive Entropy Regularization (AER), a framework that dynamically balances exploration and exploitation via three mechanisms: (1) difficulty-aware entropy coefficient allocation, (2) a target entropy anchored to the policy's initial entropy, and (3) adaptive adjustment of the global coefficient driven by policy entropy. Contribution/Results: AER overcomes the sensitivity and instability of fixed entropy coefficients across diverse tasks and LLMs. Evaluated on multiple mathematical reasoning benchmarks, AER significantly improves reasoning accuracy, effectively mitigates entropy collapse, and enhances policy diversity and generalization. It provides a scalable, robust approach to exploration in LLM-based RL, advancing reliability and adaptability in verifiable-reward settings.

📝 Abstract
Reasoning ability has become a defining capability of Large Language Models (LLMs), with Reinforcement Learning with Verifiable Rewards (RLVR) emerging as a key paradigm to enhance it. However, RLVR training often suffers from policy entropy collapse, where the policy becomes overly deterministic, hindering exploration and limiting reasoning performance. While entropy regularization is a common remedy, its effectiveness is highly sensitive to the fixed coefficient, making it unstable across tasks and models. In this work, we revisit entropy regularization in RLVR and argue that its potential has been largely underestimated. Our analysis shows that (i) tasks of varying difficulty demand distinct exploration intensities, and (ii) balanced exploration may require the policy entropy to be maintained within a moderate range below its initial level. Therefore, we propose Adaptive Entropy Regularization (AER), a framework that dynamically balances exploration and exploitation via three components: difficulty-aware coefficient allocation, initial-anchored target entropy, and dynamic global coefficient adjustment. Experiments on multiple mathematical reasoning benchmarks show that AER consistently outperforms baselines, improving both reasoning accuracy and exploration capability.
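
The last two components lend themselves to a small illustration. The Python sketch below shows one plausible reading of initial-anchored target entropy plus dynamic global coefficient adjustment: the target is set a fixed ratio below the entropy measured at the start of training, and a single global coefficient is nudged toward whatever value keeps measured entropy near that target. All names, the anchor ratio, and the proportional update rule are assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): a global entropy coefficient
# that tracks a target entropy anchored below the policy's initial entropy.

class AdaptiveEntropyCoef:
    """Adjusts one global entropy-bonus coefficient from measured entropy."""

    def __init__(self, initial_entropy, anchor_ratio=0.7,
                 coef_init=1e-3, step_size=1e-4, coef_min=0.0, coef_max=1e-2):
        # Observation (ii) in the abstract: keep entropy in a moderate
        # range *below* its initial level, hence the anchored target.
        self.target_entropy = anchor_ratio * initial_entropy
        self.coef = coef_init
        self.step_size = step_size
        self.coef_min, self.coef_max = coef_min, coef_max

    def update(self, measured_entropy):
        # Entropy below target -> strengthen regularization (more exploration);
        # entropy above target -> relax it (more exploitation).
        error = self.target_entropy - measured_entropy
        self.coef = min(max(self.coef + self.step_size * error,
                            self.coef_min), self.coef_max)
        return self.coef


# Usage inside a training loop: the coefficient scales the entropy bonus
# added to the policy-gradient objective.
ctl = AdaptiveEntropyCoef(initial_entropy=2.0)
for measured in (1.9, 1.6, 1.3, 1.2):   # entropy decaying over updates
    coef = ctl.update(measured)
    # total_loss = pg_loss - coef * policy_entropy  (entropy bonus)
```

One appeal of this closed-loop form is that it removes the hand-tuned fixed coefficient the abstract identifies as the source of instability: the coefficient becomes a consequence of the entropy target rather than a hyperparameter searched per task and model.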
Problem

Research questions and friction points this paper is trying to address.

RLVR training suffers from policy entropy collapse
Fixed entropy regularization coefficient causes instability
Difficulty-aware adaptive entropy balancing is needed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive coefficient balances exploration and exploitation
Difficulty-aware allocation adjusts regularization strength dynamically (see the sketch after this list)
Initial-anchored entropy maintains moderate policy entropy range
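
A minimal sketch of the difficulty-aware component follows. It assumes difficulty is estimated from verifier pass rates over sampled rollouts, which is natural in RLVR, though the paper's exact estimator and scaling rule may differ; harder prompts receive a larger entropy coefficient to encourage exploration, easier ones a smaller one.

```python
# Illustrative sketch (not the paper's rule): scale the entropy coefficient
# per prompt by estimated difficulty, taken here as 1 - verifier pass rate.

def difficulty_aware_coefs(pass_rates, base_coef):
    """pass_rates[i] is the fraction of sampled rollouts for prompt i that
    the verifier accepted; low pass rate = hard prompt = more exploration."""
    return [base_coef * (1.0 - rate) for rate in pass_rates]


# An easy prompt (90% solved) gets a small bonus; an unsolved one the full one.
print(difficulty_aware_coefs([0.9, 0.5, 0.0], base_coef=1e-3))
# -> approximately [1e-4, 5e-4, 1e-3]
```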
Xiaoyun Zhang
State Key Lab of Processors, Institute of Computing Technology, CAS
Xiaojian Yuan
University of Science and Technology of China
Di Huang
State Key Lab of Processors, Institute of Computing Technology, CAS
Wang You
StepFun Inc
Chen Hu
School of Artificial Intelligence and Computer Science, Jiangnan University
Jingqing Ruan
University of Chinese Academy of Sciences
Kejiang Chen
Department of Electronic Engineering and Information Science, University of Science and Technology of China
Xing Hu
State Key Lab of Processors, Institute of Computing Technology, CAS