🤖 AI Summary
This work addresses the long-standing lack of rigorous convergence theory for Proximal Policy Optimization (PPO) and clarifies the theoretical underpinnings of its multi-epoch minibatch update mechanism. The authors interpret PPO updates as an approximate policy gradient ascent procedure with controlled bias and, by incorporating techniques from random reshuffling, prove a convergence theorem for PPO under standard assumptions. Furthermore, they identify a weight-collapse issue in truncated Generalized Advantage Estimation (GAE) at episode boundaries and propose a corrective modification. Empirical evaluations demonstrate that the proposed correction substantially improves PPO's performance in environments with strong terminal signals, such as Lunar Lander.
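The update mechanism referenced here (several epochs of clipped-surrogate minibatch steps over the same rollout, with the sample order reshuffled each epoch) can be illustrated with a minimal sketch. This is not the authors' code: the `ppo_update` name, the `policy.log_prob(obs, actions)` interface, and the hyperparameter defaults are assumptions made for the example.

```python
# Illustrative sketch (not the paper's implementation) of PPO's multi-epoch
# minibatch update: the same rollout is reused for several epochs, and each
# epoch randomly reshuffles the sample indices before taking clipped-surrogate
# gradient steps on minibatches.
import torch

def ppo_update(policy, optimizer, obs, actions, old_log_probs, advantages,
               num_epochs=4, minibatch_size=64, clip_eps=0.2):
    n = obs.shape[0]
    for _ in range(num_epochs):                  # multiple passes over the same rollout
        perm = torch.randperm(n)                 # random reshuffling each epoch
        for start in range(0, n, minibatch_size):
            idx = perm[start:start + minibatch_size]
            new_log_probs = policy.log_prob(obs[idx], actions[idx])  # assumed interface
            ratio = torch.exp(new_log_probs - old_log_probs[idx])
            adv = advantages[idx]
            # Clipped surrogate objective; negated because optimizers minimize.
            unclipped = ratio * adv
            clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
            loss = -torch.min(unclipped, clipped).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```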
📝 Abstract
Proximal Policy Optimization (PPO) is among the most widely used deep reinforcement learning algorithms, yet its theoretical foundations remain incomplete. In particular, convergence guarantees and an understanding of PPO's fundamental advantages remain largely open. Under standard assumptions, we show how PPO's policy update scheme (performing multiple epochs of minibatch updates on multi-use rollouts with a surrogate gradient) can be interpreted as approximate policy gradient ascent. We show how to control the bias accumulated by the surrogate gradients and use techniques from random reshuffling to prove a convergence theorem for PPO that sheds light on its empirical success. Additionally, we identify a previously overlooked issue in the truncated Generalized Advantage Estimation (GAE) commonly used in PPO: at episode boundaries, the geometric weighting scheme collapses the entire infinite tail of the weight mass onto the longest available $k$-step advantage estimator. Empirical evaluations show that a simple weight correction can yield substantial improvements in environments with strong terminal signals, such as Lunar Lander.
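To make the weight-collapse claim concrete, here is a sketch under the standard GAE$(\gamma,\lambda)$ formulation, with TD residuals $\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)$ and $k$-step advantage estimators $\hat{A}_t^{(k)} = \sum_{l=0}^{k-1} \gamma^l \delta_{t+l}$; the authors' specific weight correction is not reproduced here. The untruncated estimator is the geometrically weighted average

$$
\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} \;=\; (1-\lambda)\sum_{k=1}^{\infty} \lambda^{k-1}\,\hat{A}_t^{(k)} \;=\; \sum_{l=0}^{\infty} (\gamma\lambda)^l\,\delta_{t+l},
$$

whereas truncating the sum at an episode boundary $T$ gives

$$
\sum_{l=0}^{T-t-1} (\gamma\lambda)^l\,\delta_{t+l} \;=\; (1-\lambda)\sum_{k=1}^{T-t-1} \lambda^{k-1}\,\hat{A}_t^{(k)} \;+\; \lambda^{T-t-1}\,\hat{A}_t^{(T-t)},
$$

so the longest available estimator $\hat{A}_t^{(T-t)}$ absorbs the entire geometric tail mass $\sum_{k \ge T-t} (1-\lambda)\lambda^{k-1} = \lambda^{T-t-1}$ rather than its nominal weight $(1-\lambda)\lambda^{T-t-1}$.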