BAPO: Stabilizing Off-Policy Reinforcement Learning for LLMs via Balanced Policy Optimization with Adaptive Clipping

📅 2025-10-21
🤖 AI Summary
Large language models (LLMs) suffer from rapid policy entropy decay, training instability, and even collapse in off-policy reinforcement learning. Two causes are identified: gradient explosion, driven by the dominance of negative-advantage samples in the policy gradient, and systematic entropy suppression from fixed PPO clipping, which impairs exploration. Method: We propose BAlanced Policy Optimization with Adaptive Clipping (BAPO), a framework whose adaptive clipping mechanism dynamically adjusts the clipping bounds to balance gradient contributions from positive- and negative-advantage samples, ensuring stable policy updates while preserving sufficient exploration. Grounded in theoretical analysis, BAPO supports off-policy sample replay and partial-rollout training. Results: On the AIME 2024/2025 benchmarks, BAPO achieves state-of-the-art performance for both 7B and 32B LLMs; notably, the 32B variant significantly outperforms mainstream systems of comparable scale, including o3-mini and Gemini-2.5-Flash-Thinking.

๐Ÿ“ Abstract
Reinforcement learning (RL) has recently become the core paradigm for aligning and strengthening large language models (LLMs). Applying RL in off-policy settings, where stale data from past policies are used for training, improves sample efficiency but remains challenging: policy entropy declines sharply, and optimization often becomes unstable or even collapses. Through theoretical and empirical analysis, we identify two key insights: (i) an imbalance in optimization, where negative-advantage samples dominate the policy gradient, suppressing useful behaviors and risking gradient explosions; and (ii) the derived Entropy-Clip Rule, which reveals that the fixed clipping mechanism in PPO-like objectives systematically blocks entropy-increasing updates, thereby driving the policy toward over-exploitation at the expense of exploration. Building on these insights, we propose BAlanced Policy Optimization with Adaptive Clipping (BAPO), a simple yet effective method that dynamically adjusts clipping bounds to adaptively re-balance positive and negative contributions, preserve entropy, and stabilize RL optimization. Across diverse off-policy scenarios, including sample replay and partial rollout, BAPO achieves fast, stable, and data-efficient training. On the AIME 2024 and AIME 2025 benchmarks, our 7B BAPO model surpasses open-source counterparts such as SkyWork-OR1-7B, while our 32B BAPO model not only achieves state-of-the-art results among models of the same scale but also outperforms leading proprietary systems like o3-mini and Gemini-2.5-Flash-Thinking.
Problem

Research questions and friction points this paper is trying to address.

Addresses policy entropy collapse in off-policy RL for LLMs
Solves gradient imbalance from dominant negative-advantage samples
Fixes systematic blocking of entropy-increasing updates in PPO
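The gradient imbalance above can be seen directly in the standard PPO clipped surrogate: for a positive-advantage token, the min() freezes the objective once the importance ratio exceeds 1+ε (zero gradient), but for a negative advantage the unclipped branch stays active, so stale off-policy samples with large ratios contribute unbounded gradient mass. A minimal sketch of this asymmetry (illustrative only; `grad_scale` is our naming, not the paper's code):

```python
def clip(x, lo, hi):
    return max(lo, min(hi, x))

def ppo_obj(ratio, adv, eps=0.2):
    """Per-token PPO clipped surrogate (to be maximized)."""
    return min(ratio * adv, clip(ratio, 1 - eps, 1 + eps) * adv)

def grad_scale(ratio, adv, eps=0.2):
    """Magnitude of d(objective)/d(log pi) up to the score function.

    Nonzero only when the unclipped branch is the min, or when the
    clipped branch is active but the ratio lies inside the bounds."""
    unclipped = ratio * adv
    clipped_ratio = clip(ratio, 1 - eps, 1 + eps)
    if unclipped <= clipped_ratio * adv:   # unclipped branch selected
        return abs(unclipped)
    return abs(unclipped) if clipped_ratio == ratio else 0.0

# A stale sample with ratio 10: the positive-advantage token is clipped
# to zero gradient, while the negative-advantage token's gradient grows
# linearly with the ratio.
print(grad_scale(10.0, +1.0))  # -> 0.0
print(grad_scale(10.0, -1.0))  # -> 10.0
```

As the ratio grows, only negative-advantage tokens keep (and amplify) their gradients, which matches the dominance and explosion behavior the paper describes.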
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balanced Policy Optimization with Adaptive Clipping
Dynamically adjusts clipping bounds to re-balance positive and negative gradient contributions
Preserves entropy and stabilizes RL optimization
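One way to picture the adaptive-clipping idea is with sign-dependent clip bounds, where the negative-side bound is adapted so that negative-advantage tokens cannot dominate the gradient mass. The sketch below is a hedged illustration under that assumption: the balancing rule and all names (`balanced_clip_update`, `c_pos`, `c_neg`, `tol`) are our own, not BAPO's exact update.

```python
import numpy as np

def balanced_clip_update(ratios, advs, c_pos=1.2, c_neg=1.2,
                         step=0.05, tol=0.1):
    """Illustrative sketch (not the paper's exact rule): cap the
    importance ratio of negative-advantage tokens at c_neg, and adapt
    c_neg so the gradient mass of negative tokens stays close to that
    of positive tokens."""
    ratios = np.asarray(ratios, dtype=float)
    advs = np.asarray(advs, dtype=float)
    pos_mass = np.abs(np.minimum(ratios, c_pos)[advs > 0]
                      * advs[advs > 0]).sum()
    neg_mass = np.abs(np.minimum(ratios, c_neg)[advs < 0]
                      * advs[advs < 0]).sum()
    # If negative samples dominate, tighten their bound; if they are
    # under-represented, relax it again (never below 1.0).
    if neg_mass > (1 + tol) * pos_mass:
        c_neg = max(1.0, c_neg - step)
    elif neg_mass < (1 - tol) * pos_mass:
        c_neg += step
    # Per-token surrogate contributions with sign-dependent bounds.
    capped = np.where(advs >= 0,
                      np.minimum(ratios, c_pos),
                      np.minimum(ratios, c_neg))
    return capped * advs, c_neg

# Three stale negative-advantage tokens vs one positive token: the
# negative-side bound is tightened from 1.2 to 1.15.
contribs, c_neg = balanced_clip_update([2, 2, 2, 1], [-1, -1, -1, 1])
print(c_neg)        # -> 1.15
print(contribs[-1])  # -> 1.0
```

The intended effect mirrors the paper's description: negative-advantage contributions are kept from dominating the update, while positive (often entropy-increasing) updates are allowed through.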
Authors
Zhiheng Xi (Fudan University)
Xin Guo (Fudan University)
Yang Nan (Fudan University)
Enyu Zhou (Fudan University)
Junrui Shen (Fudan University)
Wenxiang Chen (Fudan University)
Jiaqi Liu (Fudan University)
Jixuan Huang (Fudan University)
Zhihao Zhang (Fudan University)
Honglin Guo (Fudan University)
Xun Deng (Shanghai Qiji Zhifeng Co., Ltd.)
Zhikai Lei (Shanghai Qiji Zhifeng Co., Ltd.)
Miao Zheng (Shanghai Qiji Zhifeng Co., Ltd.)
Guoteng Wang (Shanghai Qiji Zhifeng Co., Ltd.)
Shuo Zhang (Shanghai Qiji Zhifeng Co., Ltd.)
Peng Sun (Shanghai Qiji Zhifeng Co., Ltd.)
Rui Zheng (Shanghai Qiji Zhifeng Co., Ltd.)
Hang Yan (Shanghai Qiji Zhifeng Co., Ltd.)
Tao Gui (Fudan University; Shanghai Innovation Institute)
Qi Zhang (Fudan University; Shanghai Innovation Institute)
Xuanjing Huang (Fudan University)