🤖 AI Summary
This work addresses the training instability of reinforcement learning for large language models on long-horizon tasks, where gradient variance often explodes and leads to collapse. The authors propose the Optimal Token Baseline (OTB), the first baseline that explicitly accounts for token heterogeneity, derived from first principles to enable fine-grained variance control of the advantage function. To circumvent costly gradient computations, they introduce a Logit-Gradient Proxy as an efficient forward-pass surrogate metric. With only four samples, their method matches the performance of conventional approaches using 32 samples, significantly enhancing training stability across diverse reasoning tasks while reducing token consumption by over 65%.
📝 Abstract
Reinforcement Learning (RL) for Large Language Models (LLMs) often suffers from training collapse in long-horizon tasks due to exploding gradient variance. To mitigate this, a baseline is commonly introduced for advantage computation; however, traditional value models remain difficult to optimize, and standard group-based baselines overlook sequence heterogeneity. Although classic optimal baseline theory can achieve global variance reduction, it neglects token heterogeneity and requires prohibitive gradient-based computation. In this work, we derive the Optimal Token Baseline (OTB) from first principles, proving that gradient updates should be weighted inversely to their cumulative gradient norm. To ensure efficiency, we propose the Logit-Gradient Proxy that approximates the gradient norm using only forward-pass probabilities. Our method achieves training stability and matches the performance of large group sizes ($N=32$) with only $N=4$, reducing token consumption by over 65% across single-turn and tool-integrated reasoning tasks.
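To make the forward-pass idea concrete, here is a minimal sketch of how a logit-gradient norm can be computed from probabilities alone, and how per-token weights inversely proportional to a cumulative gradient-norm proxy might look. It relies on the standard identity that for softmax plus cross-entropy, the gradient of the per-token loss with respect to the logits is `p - onehot(token)`; the specific weighting scheme (`inverse_norm_weights`) is a hypothetical illustration, not the paper's exact formulation.

```python
import numpy as np

def logit_grad_norms(probs: np.ndarray, token_ids: np.ndarray) -> np.ndarray:
    """Per-token L2 norm of the logit gradient, from probabilities only.

    For softmax + cross-entropy, d(loss)/d(logits) = p - onehot(token),
    so its norm is sqrt(||p||^2 - 2*p_token + 1) -- no backward pass needed.
    probs: (T, V) per-position probability distributions over the vocab.
    token_ids: (T,) sampled token index at each position.
    """
    sq = np.sum(probs ** 2, axis=-1)                      # ||p||^2 per position
    p_tok = probs[np.arange(len(token_ids)), token_ids]   # prob of sampled token
    return np.sqrt(sq - 2.0 * p_tok + 1.0)

def inverse_norm_weights(probs: np.ndarray, token_ids: np.ndarray,
                         eps: float = 1e-8) -> np.ndarray:
    """Hypothetical weights: inverse of the cumulative gradient-norm proxy.

    Tokens deeper in the sequence, whose cumulative gradient norm is larger,
    receive smaller weight, echoing the paper's inverse-weighting principle.
    """
    cum = np.cumsum(logit_grad_norms(probs, token_ids))
    w = 1.0 / (cum + eps)
    return w / w.sum()  # normalize so weights are comparable across sequences
```

A confident prediction (`p_token` near 1) drives the proxy toward zero, while a near-uniform distribution yields a large norm, which is the heterogeneity signal the baseline exploits.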