EntroPIC: Towards Stable Long-Term Training of LLMs via Entropy Stabilization with Proportional-Integral Control

๐Ÿ“… 2025-11-19
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
During prolonged reinforcement learning (RL) training of large language models (LLMs), the co-occurrence of positive and negative samples destabilizes entropy dynamics, leading to diminished exploration and performance degradation. To address this, we propose a Proportional-Integral (PI) entropy stabilization mechanism, the first application of classical control theory to LLM RL training. Our method dynamically adjusts the loss weights assigned to positive and negative samples to adaptively maintain a target entropy level, without modifying the policy network architecture. It is compatible with both on-policy and off-policy RL frameworks. Experiments demonstrate that our approach significantly improves training stability, accelerates convergence, and consistently outperforms existing entropy-constrained baselines across multiple tasks. By providing an interpretable, robust, and plug-and-play control paradigm, this work advances the feasibility and reliability of long-horizon RL training for LLMs.

๐Ÿ“ Abstract
Long-term training of large language models (LLMs) requires maintaining stable exploration to prevent the model from collapsing into sub-optimal behaviors. Entropy is crucial in this context, as it controls exploration and helps avoid premature convergence to sub-optimal solutions. However, existing reinforcement learning methods struggle to maintain an appropriate level of entropy, as the training process involves a mix of positive and negative samples, each affecting entropy in different ways across steps. To address this, we propose Entropy stabilization via Proportional-Integral Control (EntroPIC), a novel method that adaptively adjusts the influence of positive and negative samples by dynamically tuning their loss coefficients. This approach stabilizes entropy throughout training, ensuring efficient exploration and steady progress. We provide a comprehensive theoretical analysis for both on-policy and off-policy learning settings, demonstrating that EntroPIC is effective at controlling entropy in large-scale LLM training. Experimental results show that our method successfully maintains desired entropy levels, enabling stable and optimal RL training for LLMs.
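The control loop the abstract describes can be sketched as a standard PI controller that tracks the gap between measured policy entropy and a target, and emits an adjustment for the loss coefficients. This is a minimal illustrative sketch; the class, gains, and coefficient-update rule below are assumptions, not the paper's actual implementation.

```python
class EntropyPIController:
    """PI controller tracking a target entropy (illustrative sketch,
    not EntroPIC's actual implementation)."""

    def __init__(self, target_entropy, kp=0.1, ki=0.01):
        self.target = target_entropy
        self.kp = kp          # proportional gain (assumed value)
        self.ki = ki          # integral gain (assumed value)
        self.integral = 0.0   # accumulated entropy error

    def update(self, current_entropy):
        # Error is positive when entropy falls below target, i.e.
        # exploration is collapsing and should be pushed back up.
        error = self.target - current_entropy
        self.integral += error
        return self.kp * error + self.ki * self.integral


# Example: entropy drifting below a 1.0-nat target yields a growing
# positive adjustment, which a training loop could use to reweight
# the positive- and negative-sample loss terms.
ctrl = EntropyPIController(target_entropy=1.0)
adjustments = [ctrl.update(h) for h in (1.0, 0.9, 0.8)]
```

The integral term is what distinguishes this from a simple entropy bonus: it accumulates persistent error, so a sustained entropy drift keeps strengthening the correction until the target is restored.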
Problem

Research questions and friction points this paper is trying to address.

Maintaining stable exploration during long-term LLM training
Preventing entropy collapse from mixed positive/negative samples
Adaptively controlling entropy via dynamic loss coefficient tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stabilizes entropy via proportional-integral control
Dynamically tunes loss coefficients for samples
Ensures stable exploration in long-term training
๐Ÿ”Ž Similar Papers
No similar papers found.