$V_0$: A Generalist Value Model for Any Policy at State Zero

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces $V_0$, a generalist value model that estimates the expected success rate of any policy on unseen prompts without requiring parameter updates. Rather than fitting value-model parameters to track a continuously shifting policy, $V_0$ treats the policy's capability as explicit context, profiling the model from a history of instruction-performance pairs. Focusing on value estimation at the initial prompt (State Zero), it acts as a resource scheduler: during GRPO training it predicts success rates before rollout to allocate the sampling budget efficiently, and at deployment it routes instructions to the most cost-effective, suitable model. Experiments show that $V_0$ significantly outperforms heuristic budget allocation and achieves a Pareto-optimal performance-cost trade-off in LLM routing.

📝 Abstract
Policy gradient methods rely on a baseline to measure the relative advantage of an action, ensuring the model reinforces behaviors that outperform its current average capability. In the training of Large Language Models (LLMs) using Actor-Critic methods (e.g., PPO), this baseline is typically estimated by a Value Model (Critic), often as large as the policy model itself. However, as the policy continuously evolves, the value model requires expensive, synchronous incremental training to accurately track the shifting capabilities of the policy. To avoid this overhead, Group Relative Policy Optimization (GRPO) eliminates the coupled value model by using the average reward of a group of rollouts as the baseline; yet, this approach necessitates extensive sampling to maintain estimation stability. In this paper, we propose $V_0$, a Generalist Value Model capable of estimating the expected performance of any model on unseen prompts without requiring parameter updates. We reframe value estimation by treating the policy's dynamic capability as an explicit context input; specifically, we leverage a history of instruction-performance pairs to dynamically profile the model, departing from the traditional paradigm that relies on parameter fitting to perceive capability shifts. Focusing on value estimation at State Zero (i.e., the initial prompt, hence $V_0$), our model serves as a critical resource scheduler. During GRPO training, $V_0$ predicts success rates prior to rollout, allowing for efficient sampling budget allocation; during deployment, it functions as a router, dispatching instructions to the most cost-effective and suitable model. Empirical results demonstrate that $V_0$ significantly outperforms heuristic budget allocation and achieves a Pareto-optimal trade-off between performance and cost in LLM routing tasks.
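The budget-allocation use case described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: `predict_success_rate` stands in for the $V_0$ model (which profiles a policy from instruction-performance history), and the variance-proportional allocation rule is an assumption chosen because group-relative advantages vanish when all rollouts for a prompt receive the same reward.

```python
def allocate_rollouts(prompts, predict_success_rate, total_budget, min_rollouts=2):
    """Assign more rollouts to prompts whose predicted success rate p is
    near 0.5, where group-relative advantages carry the most signal;
    prompts with p near 0 or 1 yield nearly identical rewards and hence
    near-zero advantages under a group-mean baseline."""
    # Variance of a Bernoulli reward, p * (1 - p), peaks at p = 0.5.
    scores = [predict_success_rate(prompt) for prompt in prompts]
    weights = [p * (1.0 - p) for p in scores]
    total_weight = sum(weights) or 1.0  # guard against an all-zero case

    budgets = []
    for w in weights:
        share = round(total_budget * w / total_weight)
        budgets.append(max(min_rollouts, share))
    return budgets


# Example: a prompt near p = 0.5 gets most of the budget, while nearly
# solved (p = 0.95) and nearly hopeless (p = 0.05) prompts get the floor.
budgets = allocate_rollouts(
    ["q1", "q2", "q3"],
    lambda q: {"q1": 0.5, "q2": 0.95, "q3": 0.05}[q],
    total_budget=12,
)
```

The same predicted success rate, combined with a per-model cost, would drive the routing use case at deployment time.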
Problem

Research questions and friction points this paper is trying to address.

Value Model
Policy Gradient
Large Language Models
Resource Allocation
Model Routing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalist Value Model
State Zero
Policy Evaluation
LLM Routing
Zero-shot Value Estimation