Trajectory Bellman Residual Minimization: A Simple Value-Based Method for LLM Reasoning

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing RL pipelines for LLM reasoning rely predominantly on policy-gradient approaches (e.g., PPO), while value-based methods remain largely unexplored. Method: We propose Trajectory-level Bellman Residual Minimization (TBRM), an adaptation of classical value-function learning to LLM reasoning: it treats the LLM's own logits as trajectory-level Q-value surrogates and constructs an LLM-friendly trajectory-level Bellman objective, requiring only a single rollout per prompt, no critic network, no importance-sampling weights, and no ratio clipping, enabling end-to-end off-policy optimization. Contribution/Results: We prove that TBRM converges to the near-optimal KL-regularized policy from arbitrary off-policy data. Empirically, TBRM consistently outperforms PPO, GRPO, and other baselines on mathematical-reasoning benchmarks, with comparable or lower computational and memory overhead. This demonstrates that value-based methods are an effective, efficient, and practical route to stronger LLM reasoning.

📝 Abstract
Policy-based methods currently dominate reinforcement learning (RL) pipelines for large language model (LLM) reasoning, leaving value-based approaches largely unexplored. We revisit the classical paradigm of Bellman Residual Minimization and introduce Trajectory Bellman Residual Minimization (TBRM), an algorithm that naturally adapts this idea to LLMs, yielding a simple yet effective off-policy algorithm that optimizes a single trajectory-level Bellman objective using the model's own logits as $Q$-values. TBRM removes the need for critics, importance-sampling ratios, or clipping, and operates with only one rollout per prompt. We prove convergence to the near-optimal KL-regularized policy from arbitrary off-policy data via an improved change-of-trajectory-measure analysis. Experiments on standard mathematical-reasoning benchmarks show that TBRM consistently outperforms policy-based baselines, like PPO and GRPO, with comparable or lower computational and memory overhead. Our results indicate that value-based RL might be a principled and efficient alternative for enhancing reasoning capabilities in LLMs.
Problem

Research questions and friction points this paper is trying to address.

Value-based RL for LLM reasoning remains largely unexplored relative to policy-based methods
How to adapt classical Bellman residual minimization to LLMs via a simple trajectory-level objective
Whether a value-based method can match or outperform policy-based baselines (PPO, GRPO) on mathematical-reasoning benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trajectory Bellman Residual Minimization adapted to LLMs, using the model's own logits as Q-values
Optimizes a single trajectory-level Bellman objective with one rollout per prompt
Eliminates critic networks, importance-sampling ratios, and clipping; convergence to the near-optimal KL-regularized policy is proven for arbitrary off-policy data
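The core idea (logits as trajectory-level Q-value surrogates, regressed onto the rollout reward) can be illustrated with a minimal toy sketch. This is not the paper's exact formulation: the function name, signature, and the simplification of the Bellman target to a single terminal reward are illustrative assumptions here; the paper's objective is the KL-regularized trajectory-level Bellman residual.

```python
import math

def trajectory_bellman_residual(logprobs, ref_logprobs, reward, beta=1.0):
    """Squared trajectory-level Bellman residual (toy sketch, not the paper's exact loss).

    Treats beta * (log pi - log pi_ref), summed over the trajectory's tokens,
    as a trajectory-level Q-value surrogate and regresses it onto the terminal
    reward of a single rollout. No critic, no importance weights, no clipping.
    """
    # Trajectory-level Q surrogate: KL-regularized sum of per-token log-prob gaps.
    q_surrogate = beta * sum(lp - rlp for lp, rlp in zip(logprobs, ref_logprobs))
    # Bellman residual against the (terminal) trajectory reward.
    residual = q_surrogate - reward
    return residual ** 2

# Toy two-token trajectory: per-token log-probs under the current and reference model.
loss = trajectory_bellman_residual(
    logprobs=[-1.2, -0.5],
    ref_logprobs=[-1.0, -0.7],
    reward=1.0,
)
print(loss)  # 1.0 (the log-prob gaps cancel, so the residual is -reward)
```

In a real implementation the residual would be minimized by gradient descent on the model's logits; the single-rollout, critic-free structure is what keeps the overhead comparable to or lower than PPO/GRPO.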