The Optimal Token Baseline: Variance Reduction for Long-Horizon LLM-RL

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the training instability of large language models in long-horizon reinforcement learning, where gradient variance often explodes and leads to collapse. The authors propose the Optimal Token Baseline (OTB), which explicitly accounts for token heterogeneity and is derived from first principles to enable fine-grained variance control of the advantage function. To avoid costly gradient computations, they introduce a Logit-Gradient Proxy, an efficient surrogate metric computable from forward passes alone. With only four samples, their method matches the performance of conventional approaches using 32 samples, significantly improving training stability across diverse reasoning tasks while reducing token consumption by over 65%.

📝 Abstract
Reinforcement Learning (RL) for Large Language Models (LLMs) often suffers from training collapse in long-horizon tasks due to exploding gradient variance. To mitigate this, a baseline is commonly introduced for advantage computation; however, traditional value models remain difficult to optimize, and standard group-based baselines overlook sequence heterogeneity. Although classic optimal baseline theory can achieve global variance reduction, it neglects token heterogeneity and requires prohibitive gradient-based computation. In this work, we derive the Optimal Token Baseline (OTB) from first principles, proving that gradient updates should be weighted inversely to their cumulative gradient norm. To ensure efficiency, we propose the Logit-Gradient Proxy that approximates the gradient norm using only forward-pass probabilities. Our method achieves training stability and matches the performance of large group sizes ($N=32$) with only $N=4$, reducing token consumption by over 65% across single-turn and tool-integrated reasoning tasks.
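The abstract states that gradient updates should be weighted inversely to their cumulative gradient norm, and that the Logit-Gradient Proxy recovers the gradient norm from forward-pass probabilities alone. A minimal sketch of one plausible reading: for a softmax head, the gradient of a token's log-probability with respect to the logits is `e_a - p`, so its norm is computable from the probability vector without backpropagation, and a classic norm-weighted optimal baseline can then be formed from these weights. The function names and the exact weighting scheme below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def logit_grad_norm(probs, action):
    """Forward-pass proxy for the per-token gradient norm.

    For a softmax policy, d/dz log pi(a|s) = e_a - p, so
    ||grad||_2 = sqrt(1 - 2*p_a + sum(p_i^2)) -- probabilities only.
    """
    p_a = probs[action]
    return np.sqrt(1.0 - 2.0 * p_a + np.sum(probs ** 2))

def optimal_baseline(grad_norms, returns):
    """Classic variance-optimal baseline, b* = E[w R] / E[w],
    with weights w equal to the squared gradient norms; tokens with
    large gradients thus contribute more to the baseline, and their
    advantages are effectively down-weighted."""
    w = np.asarray(grad_norms) ** 2
    return np.sum(w * returns) / np.sum(w)

# Toy usage: two tokens from a 3-way softmax, with scalar returns.
probs = np.array([0.7, 0.2, 0.1])
norms = [logit_grad_norm(probs, a) for a in (0, 1)]
b = optimal_baseline(norms, returns=np.array([1.0, 0.0]))
advantages = np.array([1.0, 0.0]) - b
```

Under this reading, a low-probability token (large `1 - 2*p_a` term) yields a larger gradient norm, so its return is weighted more heavily in the baseline, which is consistent with the paper's goal of taming variance from heterogeneous tokens.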
Problem

Research questions and friction points this paper is trying to address.

variance reduction
long-horizon RL
LLM-RL
training collapse
gradient variance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal Token Baseline
variance reduction
LLM reinforcement learning
gradient norm weighting
Logit-Gradient Proxy