🤖 AI Summary
During prolonged reinforcement learning (RL) training of large language models (LLMs), the co-occurrence of positive and negative samples destabilizes the entropy dynamics, leading to diminished exploration and performance degradation. To address this, we propose a Proportional-Integral (PI)-based entropy stabilization mechanism, the first application of classical control theory to LLM RL training. Our method dynamically adjusts the loss weights assigned to positive and negative samples to adaptively maintain a target entropy level, without modifying the policy network architecture. It is compatible with both on-policy and off-policy RL frameworks. Experiments demonstrate that our approach significantly improves training stability, accelerates convergence, and consistently outperforms existing entropy-constrained baselines across multiple tasks. By providing an interpretable, robust, and plug-and-play control paradigm, this work advances the feasibility and reliability of long-horizon RL training for LLMs.
📄 Abstract
Long-term training of large language models (LLMs) requires maintaining stable exploration to prevent the model from collapsing into sub-optimal behaviors. Entropy is crucial in this context, as it controls exploration and helps avoid premature convergence to sub-optimal solutions. However, existing reinforcement learning methods struggle to maintain an appropriate level of entropy, as the training process involves a mix of positive and negative samples, each affecting entropy in different ways across training steps. To address this, we propose Entropy stabilization via Proportional-Integral Control (EntroPIC), a novel method that adaptively adjusts the influence of positive and negative samples by dynamically tuning their loss coefficients. This approach stabilizes entropy throughout training, ensuring efficient exploration and steady progress. We provide a comprehensive theoretical analysis for both on-policy and off-policy learning settings, demonstrating that EntroPIC is effective at controlling entropy in large-scale LLM training. Experimental results show that our method successfully maintains desired entropy levels, enabling stable and optimal RL training for LLMs.
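The core idea, a PI controller that tracks an entropy target by re-weighting positive and negative sample losses, can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the paper's implementation: the class name `EntropyPIController`, the gains `kp`/`ki`, and the exact mapping from the control signal to the two loss coefficients are all hypothetical.

```python
class EntropyPIController:
    """Hypothetical PI controller sketch: keeps policy entropy near a target
    by shifting relative weight between positive- and negative-sample losses."""

    def __init__(self, target_entropy: float, kp: float = 0.1, ki: float = 0.01):
        self.target = target_entropy
        self.kp = kp            # proportional gain (illustrative default)
        self.ki = ki            # integral gain (illustrative default)
        self.integral = 0.0     # accumulated entropy error over steps

    def update(self, current_entropy: float) -> tuple[float, float]:
        # Positive error means entropy is below target, so exploration
        # should be encouraged; negative error means entropy is too high.
        error = self.target - current_entropy
        self.integral += error
        signal = self.kp * error + self.ki * self.integral
        # One simple (assumed) coupling: raise the positive-sample coefficient
        # and lower the negative-sample coefficient when entropy is too low,
        # and vice versa, clipping both at zero.
        pos_coef = max(0.0, 1.0 + signal)
        neg_coef = max(0.0, 1.0 - signal)
        return pos_coef, neg_coef
```

In a training loop, the returned coefficients would scale the per-group policy-gradient losses each step, so the controller steers entropy toward the target without touching the network architecture.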