RoVer: Robot Reward Model as Test-Time Verifier for Vision-Language-Action Model

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language-action (VLA) models typically rely on scaling data and parameters to improve performance, but this is costly and constrained by the scarcity of embodied robotics data. Method: We propose RoVer, a test-time scaling framework for embodied agents that requires no architectural modifications or weight updates. RoVer introduces (1) a process reward model that jointly assigns scalar rewards and predicts action directions, serving as an execution-time verifier, and (2) a shared perceptual cache that enables efficient multi-candidate action generation and direction-guided sampling. Contribution/Results: Without incurring additional training cost, RoVer directly converts test-time compute into improved decision quality. Experiments demonstrate substantial gains in action selection accuracy under fixed computational budgets, providing the first systematic evidence that test-time scaling is both effective and scalable for embodied decision-making.

📝 Abstract
Vision-Language-Action (VLA) models have become a prominent paradigm for embodied intelligence, yet further performance improvements typically rely on scaling up training data and model size -- an approach that is prohibitively expensive for robotics and fundamentally limited by data collection costs. We address this limitation with $\mathbf{RoVer}$, an embodied test-time scaling framework that uses a $\mathbf{Ro}$bot Process Reward Model (PRM) as a Test-Time $\mathbf{Ver}$ifier to enhance the capabilities of existing VLA models without modifying their architectures or weights. Specifically, RoVer (i) assigns scalar-based process rewards to evaluate the reliability of candidate actions, and (ii) predicts an action-space direction for candidate expansion/refinement. During inference, RoVer generates multiple candidate actions concurrently from the base policy, expands them along PRM-predicted directions, and then scores all candidates with the PRM to select the optimal action for execution. Notably, by caching shared perception features, it can amortize perception cost and evaluate more candidates under the same test-time computational budget. Essentially, our approach effectively transforms available computing resources into better action decision-making, realizing the benefits of test-time scaling without extra training overhead. Our contributions are threefold: (1) a general, plug-and-play test-time scaling framework for VLAs; (2) a PRM that jointly provides scalar process rewards and an action-space direction to guide exploration; and (3) an efficient direction-guided sampling strategy that leverages a shared perception cache to enable scalable candidate generation and selection during inference.
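The inference loop described in the abstract (sample candidates, expand along PRM-predicted directions, score, execute the best) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `base_policy`, `prm_score`, and `prm_direction` are hypothetical stand-ins for the base VLA policy head and the two PRM outputs, and the perception features are reduced to a cached vector computed once per step.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_policy(obs_features, n_candidates, action_dim=7, noise=0.1):
    """Stub base policy: noisy samples around a deterministic action head."""
    mean = np.tanh(obs_features[:action_dim])
    return mean + noise * rng.standard_normal((n_candidates, action_dim))

def prm_score(obs_features, actions):
    """Stub scalar process reward: higher for actions nearer the policy mean."""
    mean = np.tanh(obs_features[:actions.shape[1]])
    return -np.linalg.norm(actions - mean, axis=1)

def prm_direction(obs_features, actions):
    """Stub PRM direction head: a refinement vector per candidate."""
    mean = np.tanh(obs_features[:actions.shape[1]])
    return mean - actions

def rover_select(obs_features, n_candidates=8, step=0.5):
    # 1) Sample candidates from the base policy, reusing cached features.
    candidates = base_policy(obs_features, n_candidates)
    # 2) Expand each candidate along its PRM-predicted direction.
    expanded = candidates + step * prm_direction(obs_features, candidates)
    # 3) Score the full pool with the PRM and pick the best action.
    pool = np.concatenate([candidates, expanded], axis=0)
    scores = prm_score(obs_features, pool)
    return pool[np.argmax(scores)]

# Perception features are computed once and shared by all candidates,
# which is what lets test-time compute go to candidate evaluation.
obs = rng.standard_normal(32)
action = rover_select(obs)
print(action.shape)  # (7,)
```

Because the perception features `obs` are computed once per control step, the marginal cost of each extra candidate is only a policy-head sample plus one PRM evaluation, which is how the framework trades compute for decision quality under a fixed budget.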
Problem

Research questions and friction points this paper is trying to address.

Enhancing Vision-Language-Action models without architectural changes or retraining
Using robot process rewards to verify and score candidate actions at test-time
Improving action decision-making by leveraging computational resources during inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a robot process reward model (PRM) as a test-time verifier
Generates and expands multiple candidate actions for refinement
Leverages a shared perception cache for efficient candidate evaluation
👥 Authors
Mingtong Dai (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Lingbo Liu (Peng Cheng Laboratory)
Yongjie Bai (School of Computer Science and Engineering, Sun Yat-sen University; Peng Cheng Laboratory)
Yang Liu (School of Computer Science and Engineering, Sun Yat-sen University)
Zhouxia Wang (The University of Hong Kong)
Rui Su (University of Sydney)
Chunjie Chen (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
Liang Lin (Professor of Computer Science, Sun Yat-sen University; Fellow of IEEE/IAPR)
Xinyu Wu (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)