🤖 AI Summary
This work addresses the instability and inefficiency in training large vocabulary language models with Proximal Policy Optimization (PPO), which stems from its ratio-clipping mechanism excessively suppressing updates to low-probability tokens while inadequately constraining high-probability ones. To overcome this limitation, the authors propose Divergence Proximal Policy Optimization (DPPO), a theoretically grounded approach that replaces heuristic clipping with direct estimates of policy divergence—such as total variation (TV) or Kullback–Leibler (KL) divergence—to enforce trust-region constraints. Efficient computation is achieved through Binary and Top-K sparse approximations of the divergence measures. Empirical results across multiple tasks demonstrate that DPPO substantially improves training stability and sample efficiency, offering a more robust optimization framework for reinforcement learning–based fine-tuning of large language models.
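The contrast between PPO's clipped surrogate and a divergence-constrained surrogate can be sketched in a few lines. The snippet below is an illustrative approximation, not the paper's implementation: the function names, the penalty coefficient `beta`, and the exact form of the DPPO objective are assumptions; only the general shape (drop the per-token ratio clip, add a direct divergence penalty) follows the description above.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, adv, eps=0.2):
    # Standard PPO surrogate: clip the single-sample probability ratio.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * adv
    return -torch.min(unclipped, clipped).mean()

def divergence_penalized_loss(logp_new, logp_old, adv, div_est, beta=1.0):
    # Hypothetical DPPO-style surrogate (assumed form): keep the unclipped
    # policy-gradient term and constrain the update with a per-token
    # divergence estimate `div_est` (e.g., TV or KL over the vocabulary).
    ratio = torch.exp(logp_new - logp_old)
    return -(ratio * adv).mean() + beta * div_est.mean()
```

The key design difference is that the constraint acts on an estimate of the full policy divergence at each position rather than on the sampled token's ratio alone, so low-probability tokens are no longer over-penalized by the clip.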
📝 Abstract
Reinforcement learning (RL) has become a cornerstone for fine-tuning Large Language Models (LLMs), with Proximal Policy Optimization (PPO) serving as the de facto standard algorithm. Despite its ubiquity, we argue that the core ratio-clipping mechanism in PPO is structurally ill-suited to the large vocabularies inherent to LLMs. PPO constrains policy updates based on the probability ratio of sampled tokens, which serves as a noisy single-sample Monte Carlo estimate of the true policy divergence. This creates a sub-optimal learning dynamic: updates to low-probability tokens are aggressively over-penalized, while potentially catastrophic shifts in high-probability tokens are under-constrained, leading to training inefficiency and instability. To address this, we propose Divergence Proximal Policy Optimization (DPPO), which replaces heuristic clipping with a more principled constraint based on a direct estimate of policy divergence (e.g., Total Variation or KL). To avoid a prohibitive memory footprint, we introduce efficient Binary and Top-K approximations that capture the essential divergence with negligible overhead. Extensive empirical evaluations demonstrate that DPPO achieves superior training stability and efficiency compared to existing methods, offering a more robust foundation for RL-based LLM fine-tuning.
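To make the sparse divergence estimates concrete, here is a minimal sketch of how Top-K and Binary approximations of total variation could be computed. The abstract does not specify the exact estimators, so the function names, the choice of `k`, and the handling of the residual tail mass are assumptions made for illustration.

```python
import torch

def tv_topk(logits_new, logits_old, k=32):
    # Sparse Top-K estimate of total variation: compare only the k most
    # likely tokens under the old policy and lump the remaining mass into
    # a single residual bucket.
    p_old = torch.softmax(logits_old, dim=-1)
    p_new = torch.softmax(logits_new, dim=-1)
    topk_p_old, idx = p_old.topk(k, dim=-1)
    topk_p_new = p_new.gather(-1, idx)
    # Difference in the probability mass outside the Top-K set.
    tail = topk_p_new.sum(-1) - topk_p_old.sum(-1)
    return 0.5 * ((topk_p_old - topk_p_new).abs().sum(-1) + tail.abs())

def tv_binary(logp_new_sampled, logp_old_sampled):
    # Binary estimate: collapse the vocabulary into {sampled token, rest};
    # the TV of a two-outcome distribution is |p_old - p_new|.
    return (logp_old_sampled.exp() - logp_new_sampled.exp()).abs()
```

The Binary variant needs only the sampled token's log-probabilities (the same inputs PPO already uses), while the Top-K variant trades a small amount of extra memory for a tighter divergence estimate; both avoid materializing full-vocabulary statistics beyond a single softmax.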