🤖 AI Summary
Existing reinforcement learning (RL) post-training methods for large language models (LLMs) rely heavily on heuristic designs and lack rigorous theoretical foundations, which limits training stability and performance gains.
Method: We propose the first unified theoretical framework for LLM RL post-training. Grounded in statistical analysis, we formally characterize the signal-to-noise ratio (SNR) of policy gradient estimators and prove that SNR provides principled guidance for adaptive learning rate scheduling. We further derive a variance-optimal gradient-weighted baseline, enabling joint optimization of learning rates and baselines. Leveraging signal-noise decomposition, variance analysis, and convergence upper-bound derivation, we design an adaptive algorithm with provable convergence guarantees.
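As an illustration of the SNR-guided learning-rate idea, the following minimal sketch estimates a gradient SNR from a batch of per-sample gradient estimates and scales the step size accordingly. Both the SNR proxy (squared norm of the mean gradient over total sample variance) and the `snr / (1 + snr)` schedule are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def gradient_snr(per_sample_grads):
    """SNR proxy for a batch of per-sample policy-gradient estimates
    of shape (B, D): squared norm of the mean gradient divided by the
    total (trace) sample variance. Illustrative definition only."""
    g = np.asarray(per_sample_grads, dtype=float)
    mean_grad = g.mean(axis=0)
    signal = float(np.dot(mean_grad, mean_grad))
    noise = float(g.var(axis=0, ddof=1).sum())
    return signal / (noise + 1e-12)

def snr_adaptive_lr(base_lr, snr):
    """Shrink the step size when gradients are noisy: the learning rate
    scales as snr / (1 + snr), approaching base_lr as noise vanishes."""
    return base_lr * snr / (1.0 + snr)
```

Under this schedule, a high-SNR batch takes a step close to `base_lr`, while a noise-dominated batch takes a correspondingly smaller one.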
Results: Experiments on Qwen3-4B-Base and Qwen3-8B-Base demonstrate substantial improvements over state-of-the-art policy optimization methods, empirically validating the efficacy of theory-driven design in large-scale LLM post-training.
📝 Abstract
Existing reinforcement learning (RL)-based post-training methods for large language models have advanced rapidly, yet their design has largely been guided by heuristics rather than systematic theoretical principles. This gap limits our understanding of the properties of the gradient estimators and the associated optimization algorithms, thereby constraining opportunities to improve training stability and overall performance. In this work, we provide a unified theoretical framework that characterizes the statistical properties of commonly used policy-gradient estimators under mild assumptions. Our analysis establishes unbiasedness, derives exact variance expressions, and yields an optimization-loss upper bound that enables principled reasoning about learning dynamics. Building on these results, we prove convergence guarantees and derive an adaptive learning-rate schedule governed by the signal-to-noise ratio (SNR) of gradients. We further show that the variance-optimal baseline is a gradient-weighted estimator, offering a new principle for variance reduction and naturally enhancing stability beyond existing methods. These insights motivate Optimal Baseline and Learning-Rate Policy Optimization (OBLR-PO), an algorithm that jointly adapts learning rates and baselines in a theoretically grounded manner. Experiments on Qwen3-4B-Base and Qwen3-8B-Base demonstrate consistent gains over existing policy optimization methods, validating that our theoretical contributions translate into practical improvements in large-scale post-training.
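The gradient-weighted baseline described above can be sketched in a few lines. For a REINFORCE-style estimator with a scalar baseline, the classical variance-minimizing choice weights each sample's return by the squared norm of its score gradient; whether this matches the paper's exact estimator is an assumption here:

```python
import numpy as np

def gradient_weighted_baseline(rewards, score_sq_norms):
    """Scalar baseline minimizing the variance of a REINFORCE-style
    estimator mean_i (R_i - b) * g_i, where g_i is the per-sample score
    gradient: b* = E[||g||^2 * R] / E[||g||^2]."""
    rewards = np.asarray(rewards, dtype=float)
    weights = np.asarray(score_sq_norms, dtype=float)
    return float((weights * rewards).sum() / (weights.sum() + 1e-12))

# Unlike the plain mean-reward baseline, samples with larger score
# gradients contribute more to the centering value.
b = gradient_weighted_baseline([1.0, 3.0], [3.0, 1.0])  # (3*1 + 1*3) / 4 = 1.5
```

With equal weights this reduces to the familiar mean-reward baseline; the gradient weighting is what shifts it toward the variance-optimal choice.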