AI Summary
Problem: Large language models (LLMs) trained with off-policy reinforcement learning suffer from rapid policy entropy decay, training instability, and even collapse. Two causes are identified: gradient explosions when negative-advantage samples dominate the policy gradient, and systematic entropy suppression by the fixed PPO clipping mechanism, which impairs exploration.
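For context, the fixed clipping referred to above is the standard PPO-style clipped surrogate (a textbook formula, not notation taken from this paper), which caps the importance ratio symmetrically around 1 with a single constant $\epsilon$:

```latex
% Standard PPO clipped surrogate; \epsilon is the fixed clip range
\mathcal{L}^{\mathrm{clip}}(\theta)
  = \mathbb{E}_t\!\left[
      \min\!\Big(
        r_t(\theta)\,\hat{A}_t,\;
        \operatorname{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t
      \Big)
    \right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}.
```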
Method: We propose BAlanced Policy Optimization with Adaptive Clipping (BAPO), a framework whose adaptive clipping mechanism dynamically adjusts the clipping bounds to re-balance the gradient contributions of positive- and negative-advantage samples, stabilizing policy updates while preserving sufficient exploration. Grounded in theoretical analysis, BAPO supports off-policy sample replay and partial-rollout training. A minimal sketch of the general idea appears below.
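The sketch below (PyTorch-style) shows a PPO-like surrogate with decoupled lower/upper clip bounds plus a toy heuristic for re-balancing them; the function names, parameters, and the heuristic itself are illustrative assumptions, not the paper's actual BAPO rule.

```python
import torch


def clipped_surrogate(logp_new, logp_old, advantages, clip_low, clip_high):
    """PPO-style surrogate with decoupled lower/upper clip bounds.

    `clip_low` / `clip_high` stand in for the bounds that BAPO is described as
    adjusting dynamically; the paper's adaptation rule is not reproduced here.
    """
    ratio = torch.exp(logp_new - logp_old)  # importance ratio pi_theta / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_low, 1.0 + clip_high) * advantages
    # Pessimistic (elementwise min) surrogate as in PPO, negated for minimization.
    return -torch.min(unclipped, clipped).mean()


def rebalanced_clip_bounds(advantages, base_eps=0.2):
    """Hypothetical heuristic (an assumption, not the paper's rule): widen the
    upper bound, which mainly gates positive-advantage tokens, when
    negative-advantage tokens dominate the batch, and vice versa, so both
    sides contribute more evenly to the gradient."""
    pos_frac = (advantages > 0).float().mean().clamp(min=1e-3)
    neg_frac = (advantages < 0).float().mean().clamp(min=1e-3)
    clip_high = base_eps * (neg_frac / pos_frac).clamp(0.5, 2.0).item()
    clip_low = base_eps * (pos_frac / neg_frac).clamp(0.5, 2.0).item()
    return clip_low, clip_high
```

In a training loop the bounds would be recomputed per batch and passed to `clipped_surrogate`; the heuristic above only conveys the direction of the idea (re-balancing positive and negative contributions), not the specific adaptive clipping used by BAPO.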
Results: On the AIME 2024 and AIME 2025 benchmarks, BAPO achieves state-of-the-art performance at both 7B and 32B scales; notably, the 32B model not only leads among models of the same scale but also outperforms proprietary systems such as o3-mini and Gemini-2.5-Flash-Thinking.
Abstract
Reinforcement learning (RL) has recently become the core paradigm for aligning and strengthening large language models (LLMs). Applying RL in off-policy settings, where stale data from past policies are reused for training, improves sample efficiency, yet it remains challenging: policy entropy declines sharply, and optimization often becomes unstable or even collapses. Through theoretical and empirical analysis, we identify two key insights: (i) an imbalance in optimization, where negative-advantage samples dominate the policy gradient, suppressing useful behaviors and risking gradient explosions; and (ii) the derived Entropy-Clip Rule, which reveals that the fixed clipping mechanism in PPO-like objectives systematically blocks entropy-increasing updates, thereby driving the policy toward over-exploitation at the expense of exploration. Building on these insights, we propose BAlanced Policy Optimization with Adaptive Clipping (BAPO), a simple yet effective method that dynamically adjusts clipping bounds to adaptively re-balance positive and negative contributions, preserve entropy, and stabilize RL optimization. Across diverse off-policy scenarios, including sample replay and partial rollout, BAPO achieves fast, stable, and data-efficient training. On the AIME 2024 and AIME 2025 benchmarks, our 7B BAPO model surpasses open-source counterparts such as SkyWork-OR1-7B, while our 32B BAPO model not only achieves state-of-the-art results among models of the same scale but also outperforms leading proprietary systems like o3-mini and Gemini-2.5-Flash-Thinking.