🤖 AI Summary
Vision-language-action (VLA) models typically rely on scaling data and parameters to improve performance, but this is costly and constrained by the scarcity of embodied robotics data. Method: We propose RoVer, a test-time scaling framework for embodied agents that requires no architectural modifications or weight updates. RoVer introduces (1) a process reward model that jointly assigns scalar process rewards and predicts action-space directions, serving as an execution-time verifier, and (2) a shared perceptual cache that enables efficient multi-candidate action generation and direction-guided sampling. Contribution/Results: Without incurring additional training cost, RoVer directly converts test-time compute into improved decision quality. Experiments demonstrate substantial gains in action selection accuracy under fixed computational budgets, providing the first systematic evidence that test-time scaling is both effective and scalable for embodied decision-making.
📝 Abstract
Vision-Language-Action (VLA) models have become a prominent paradigm for embodied intelligence, yet further performance improvements typically rely on scaling up training data and model size -- an approach that is prohibitively expensive for robotics and fundamentally limited by data collection costs. We address this limitation with $\mathbf{RoVer}$, an embodied test-time scaling framework that uses a $\mathbf{Ro}$bot Process Reward Model (PRM) as a Test-Time $\mathbf{Ver}$ifier to enhance the capabilities of existing VLA models without modifying their architectures or weights. Specifically, RoVer (i) assigns scalar process rewards to evaluate the reliability of candidate actions, and (ii) predicts an action-space direction for candidate expansion and refinement. During inference, RoVer generates multiple candidate actions concurrently from the base policy, expands them along PRM-predicted directions, and then scores all candidates with the PRM to select the optimal action for execution. Notably, by caching shared perception features, RoVer amortizes perception cost and evaluates more candidates under the same test-time computational budget. In essence, our approach transforms available test-time compute into better action decisions, realizing the benefits of test-time scaling without extra training overhead. Our contributions are threefold: (1) a general, plug-and-play test-time scaling framework for VLAs; (2) a PRM that jointly provides scalar process rewards and an action-space direction to guide exploration; and (3) an efficient direction-guided sampling strategy that leverages a shared perception cache to enable scalable candidate generation and selection during inference.
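The inference procedure described above (sample candidates, expand them along PRM-predicted directions, then verify and select) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: `BasePolicy`, `ToyPRM`, and all their methods are hypothetical stand-ins, and the "perception features" are a single scalar rather than real visual features.

```python
import numpy as np

rng = np.random.default_rng(0)

class BasePolicy:
    """Hypothetical stand-in for a VLA policy: produces perception features
    once, then samples noisy 2-D candidate actions from them."""
    def perceive(self, obs):
        return obs.mean()  # toy "perception feature", computed once and cached

    def sample_actions(self, features, n):
        return features + rng.normal(0.0, 0.5, size=(n, 2))  # n candidate actions

class ToyPRM:
    """Hypothetical process reward model: scores actions by closeness to a
    target point and predicts a unit direction toward it in action space."""
    def __init__(self, target):
        self.target = np.asarray(target, dtype=float)

    def score(self, features, actions):
        return -np.linalg.norm(actions - self.target, axis=-1)  # scalar reward per candidate

    def direction(self, features, actions):
        d = self.target - actions
        return d / (np.linalg.norm(d, axis=-1, keepdims=True) + 1e-8)

def rover_step(policy, prm, obs, n_candidates=8, step_size=0.2):
    # Perception is run once; its output is shared by all candidates,
    # amortizing the cost across the whole batch.
    features = policy.perceive(obs)
    cands = policy.sample_actions(features, n_candidates)
    # Direction-guided expansion: nudge each candidate along the
    # PRM-predicted action-space direction.
    expanded = cands + step_size * prm.direction(features, cands)
    all_cands = np.concatenate([cands, expanded], axis=0)
    # Verify: score every candidate and execute the best one.
    scores = prm.score(features, all_cands)
    return all_cands[np.argmax(scores)]

policy, prm = BasePolicy(), ToyPRM(target=[1.0, 1.0])
best_action = rover_step(policy, prm, np.zeros(4))
```

Because the expanded set always contains the original samples, the selected action can only score at least as well under the PRM as the best raw policy sample, which is the sense in which extra test-time compute is converted into better decisions.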