🤖 AI Summary
This paper identifies a novel cause of entropy collapse in reinforcement learning with verifiable rewards (RLVR): the clipping mechanism in PPO and GRPO inherently induces an entropy bias. Clip-low increases entropy and encourages exploration, whereas clip-high suppresses entropy and accelerates convergence. Under standard hyperparameters, clip-high dominates, leading to persistent entropy decay even under random rewards and thereby introducing a reward-agnostic confounding factor.
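For reference, the clipped surrogate underlying this analysis can be written with decoupled bounds (this rendering with $\varepsilon_{\mathrm{low}}$ and $\varepsilon_{\mathrm{high}}$ follows common RLVR practice and is ours, not quoted from the paper):

$$
\mathcal{L}_{\mathrm{clip}}(\theta) = \mathbb{E}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta),\, 1-\varepsilon_{\mathrm{low}},\, 1+\varepsilon_{\mathrm{high}}\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.
$$

Here clip-low refers to the lower bound $1-\varepsilon_{\mathrm{low}}$, which binds on tokens being down-weighted, and clip-high to the upper bound $1+\varepsilon_{\mathrm{high}}$, which binds on tokens being up-weighted.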
Method: To counteract premature convergence, the authors propose actively amplifying clip-low to regulate policy entropy.
Contribution/Results: Theoretical analysis and empirical evaluation demonstrate that this intervention substantially mitigates entropy collapse, improves long-horizon reasoning stability, and enhances generalization. It offers a principled, actionable mechanism for balancing exploration and exploitation in reinforcement learning for large language models (LLMs), advancing both the interpretability and the controllability of policy optimization dynamics.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has recently emerged as the leading approach for enhancing the reasoning capabilities of large language models (LLMs). However, RLVR is prone to entropy collapse, where the LLM quickly converges to a near-deterministic policy, hindering exploration and progress during prolonged RL training. In this work, we reveal that the clipping mechanism in PPO and GRPO induces biases on entropy. Through theoretical and empirical analyses, we show that clip-low increases entropy, while clip-high decreases it. Further, under standard clipping parameters, the effect of clip-high dominates, resulting in an overall entropy reduction even when purely random rewards are provided to the RL algorithm. Our findings highlight an overlooked confounding factor in RLVR: independent of the reward signal, the clipping mechanism influences entropy, which in turn affects reasoning behavior. Furthermore, our analysis demonstrates that clipping can be deliberately used to control entropy. Specifically, with a more aggressive clip-low value, one can increase entropy, promote exploration, and ultimately prevent entropy collapse in RLVR training.
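To make the intervention concrete, here is a minimal PyTorch sketch of a clipped surrogate loss with decoupled clip bounds. The function name, signature, and default `eps_low`/`eps_high` values are illustrative assumptions, not the authors' implementation; the paper's intervention amounts to setting clip-low more aggressively than the symmetric default.

```python
import torch

def clipped_surrogate_loss(logp_new, logp_old, advantages,
                           eps_low=0.2, eps_high=0.2):
    """PPO/GRPO-style clipped surrogate with decoupled clip bounds.

    Per the paper's analysis, the lower bound (clip-low) pushes entropy up
    while the upper bound (clip-high) pushes it down; a more aggressive
    clip-low can therefore counteract entropy collapse. The symmetric
    defaults here are the standard setting, not the authors' recommended
    configuration.
    """
    # Importance ratio r_t = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t).
    ratio = torch.exp(logp_new - logp_old)
    # Decoupled clipping: clip-low at 1 - eps_low, clip-high at 1 + eps_high.
    clipped_ratio = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high)
    # Pessimistic elementwise min over the unclipped and clipped surrogates,
    # negated because optimizers minimize.
    surrogate = torch.min(ratio * advantages, clipped_ratio * advantages)
    return -surrogate.mean()
```

A training loop would call this per mini-batch with token-level log-probabilities and (in GRPO, group-normalized) advantages; tuning `eps_low` independently of `eps_high` is the knob the paper identifies for controlling entropy.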