🤖 AI Summary
In reinforcement learning (RL) with large language models (LLMs), premature convergence to suboptimal policies—termed *entropy collapse*—degrades reasoning capability, response diversity, and probability calibration. This work identifies *positive-advantage tokens* as the primary driver of entropy collapse and demonstrates that off-policy update frequency, data diversity, and optimization clipping thresholds critically govern entropy dynamics. To address this, we propose an *advantage-aware weighted loss function* that explicitly controls entropy by differentially scaling gradient weights for positive- versus negative-advantage tokens. Our method integrates off-policy updates with a verifiable reward mechanism. Evaluated across multiple benchmarks—including mathematical reasoning and code generation—it significantly mitigates entropy collapse, improves response diversity by +23.6%, and boosts task performance by an average of +5.8%. The approach yields a reproducible, plug-and-play RL training paradigm for LLMs.
📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has emerged as a predominant approach for enhancing the reasoning capabilities of large language models (LLMs). However, the entropy of LLMs usually collapses during RLVR training, causing premature convergence to suboptimal local minima and hinder further performance improvement. Although various approaches have been proposed to mitigate entropy collapse, a comprehensive study of entropy in RLVR remains lacking. To address this gap, we conduct extensive experiments to investigate the entropy dynamics of LLMs trained with RLVR and analyze how model entropy correlates with response diversity, calibration, and performance across various benchmarks. Our findings reveal that the number of off-policy updates, the diversity of training data, and the clipping thresholds in the optimization objective are critical factors influencing the entropy of LLMs trained with RLVR. Moreover, we theoretically and empirically demonstrate that tokens with positive advantages are the primary contributors to entropy collapse, and that model entropy can be effectively regulated by adjusting the relative loss weights of tokens with positive and negative advantages during training.