Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning

📅 2025-12-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
In RL-based post-training of large language models, off-policy updates often induce distributional shift, causing severe entropy fluctuations and training instability; while PPO-Clip mitigates local bias via importance weight clipping, it fails to constrain global probability shifts for unobserved actions. This paper proposes Entropy Ratio Clipping (ERC), a novel method that uses the ratio of policy entropies as a global exploration metric and introduces a bidirectional soft constraint to bound policy update magnitude at the distribution level. ERC is framework-agnostic and integrates seamlessly into mainstream algorithms such as DAPO and GPPO. Experiments demonstrate that ERC significantly improves training stability—reducing entropy fluctuation by 37%–52%—and achieves state-of-the-art performance across multiple alignment benchmarks, validating its effectiveness and generalizability.

📝 Abstract
Large language model post-training relies on reinforcement learning to improve model capability and alignment quality. However, the off-policy training paradigm introduces distribution shift, which often pushes the policy beyond the trust region, leading to training instabilities manifested as fluctuations in policy entropy and unstable gradients. Although PPO-Clip mitigates this issue through importance clipping, it still overlooks the global distributional shift of actions. To address these challenges, we propose using the entropy ratio between the current and previous policies as a new global metric that effectively quantifies the relative change in policy exploration throughout updates. Building on this metric, we introduce an Entropy Ratio Clipping (ERC) mechanism that imposes bidirectional constraints on the entropy ratio. This stabilizes policy updates at the global distribution level and compensates for the inability of PPO-Clip to regulate probability shifts of un-sampled actions. We integrate ERC into both DAPO and GPPO reinforcement learning algorithms. Experiments across multiple benchmarks show that ERC consistently improves performance.
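The core quantity in the abstract, the entropy ratio between the current and previous policies, can be sketched as follows. This is an illustrative reading, not the paper's exact formulation: the band edges `low`/`high` and the quadratic penalty outside the band are assumptions standing in for whatever soft constraint the authors actually use.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of categorical distributions along the last axis."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def erc_penalty(probs_new, probs_old, low=0.9, high=1.1):
    """Bidirectional soft constraint on the entropy ratio.

    Zero while H(pi_new) / H(pi_old) stays inside [low, high], and
    growing quadratically as the ratio leaves the band in either
    direction.  Band edges and the quadratic hinge are illustrative
    assumptions, not the paper's published hyperparameters.
    """
    ratio = np.mean(entropy(probs_new)) / (np.mean(entropy(probs_old)) + 1e-12)
    if ratio > high:
        return (ratio - high) ** 2
    if ratio < low:
        return (low - ratio) ** 2
    return 0.0
```

Because the penalty is bidirectional, it pushes back both when the policy collapses toward determinism (entropy ratio well below 1) and when it drifts toward excess exploration (ratio well above 1), which is the stabilizing behavior the abstract describes.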
Problem

Research questions and friction points this paper is trying to address.

Addresses distribution shift in off-policy RL training
Stabilizes policy entropy and gradients globally
Regulates probability shifts of un-sampled actions
Innovation

Methods, ideas, or system contributions that make the work stand out.

ERC uses entropy ratio as a global metric for policy change
ERC imposes bidirectional constraints on entropy ratio for stability
ERC integrates into DAPO and GPPO algorithms to enhance performance
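One plausible way the bullets above compose, with an entropy-ratio penalty added to a PPO-Clip-style surrogate, is sketched below. The hyperparameters `eps`, `low`, `high`, and `beta`, and the quadratic hinge form, are all assumptions for illustration; the paper's actual integration into DAPO/GPPO may differ.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of categorical distributions along the last axis."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def erc_regularized_loss(is_ratio, adv, probs_new, probs_old,
                         eps=0.2, low=0.9, high=1.1, beta=1.0):
    """PPO-Clip surrogate plus a soft bidirectional entropy-ratio penalty.

    is_ratio : per-token importance ratios pi_new(a|s) / pi_old(a|s)
    adv      : per-token advantage estimates
    probs_*  : full next-token distributions of the two policies

    All hyperparameters here are illustrative assumptions.
    """
    # Standard clipped surrogate (negated, so lower is better).
    surrogate = np.minimum(is_ratio * adv,
                           np.clip(is_ratio, 1 - eps, 1 + eps) * adv)
    # Global entropy-ratio penalty over the full distributions,
    # covering actions the importance ratio never sees.
    h_ratio = np.mean(entropy(probs_new)) / (np.mean(entropy(probs_old)) + 1e-12)
    penalty = max(h_ratio - high, 0.0) ** 2 + max(low - h_ratio, 0.0) ** 2
    return -np.mean(surrogate) + beta * penalty
```

The point of the extra term is that `is_ratio` only constrains sampled actions, while `h_ratio` is computed from the whole distribution, which is how ERC can regulate the probability shifts of un-sampled actions that PPO-Clip alone leaves unconstrained.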