RoboMonkey: Scaling Test-Time Sampling and Verification for Vision-Language-Action Models

📅 2025-06-21
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the limited robustness and generalization of vision-language-action (VLA) models in unstructured real-world environments, this paper proposes a test-time scaling framework. It generates diverse action proposals via Gaussian perturbation, constructs an action proposal distribution through majority voting, and employs a vision-language model (VLM), trained on synthetic data, as an action verifier to select the optimal action. A key contribution is the empirical finding that action error decays with sample count following an exponentiated power law, enabling principled co-scaling of sampling and verification. The method achieves a 25% absolute improvement on out-of-distribution tasks and an 8% improvement on in-distribution tasks. When adapting to new robot setups, jointly fine-tuning the VLA and the action verifier yields a further 7% gain over fine-tuning the VLA alone.
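The sample-perturb-vote-verify loop described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the function names, the binning-based stand-in for majority voting, and the distance-based stand-in verifier are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_actions(vla_sample, n_init=4, n_total=16, sigma=0.05):
    """Draw a few actions from the VLA, then expand the pool with
    Gaussian perturbations of randomly chosen base actions.
    (Illustrative parameters; not the paper's exact recipe.)"""
    base = np.stack([vla_sample() for _ in range(n_init)])
    idx = rng.integers(0, n_init, size=n_total - n_init)
    perturbed = base[idx] + rng.normal(0.0, sigma, size=(n_total - n_init, base.shape[1]))
    return np.concatenate([base, perturbed])

def majority_vote_center(actions, n_bins=8):
    """Crude stand-in for majority voting: bin the first action
    dimension and keep the proposals in the most populated bin."""
    edges = np.linspace(actions[:, 0].min(), actions[:, 0].max(), n_bins)
    bins = np.digitize(actions[:, 0], edges)
    top = np.bincount(bins).argmax()
    return actions[bins == top]

def select_action(actions, verifier_score):
    """Return the proposal the verifier scores highest."""
    scores = np.array([verifier_score(a) for a in actions])
    return actions[scores.argmax()]

# Toy usage: a stub "VLA" emits noisy 7-DoF actions near a target,
# and a stub "verifier" scores proximity to that target.
target = np.zeros(7)
vla = lambda: target + rng.normal(0, 0.2, size=7)
verifier = lambda a: -np.linalg.norm(a - target)

pool = sample_actions(vla)
candidates = majority_vote_center(pool)
best = select_action(candidates, verifier)
```

In the actual system the verifier is a VLM conditioned on the observation, but the selection structure is the same: score every surviving proposal and execute the argmax.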

๐Ÿ“ Abstract
Vision-Language-Action (VLA) models have demonstrated remarkable capabilities in visuomotor control, yet ensuring their robustness in unstructured real-world environments remains a persistent challenge. In this paper, we investigate test-time scaling through the lens of sampling and verification as means to enhance the robustness and generalization of VLAs. We first demonstrate that the relationship between action error and the number of generated samples follows an exponentiated power law across a range of VLAs, indicating the existence of inference-time scaling laws. Building on these insights, we introduce RoboMonkey, a test-time scaling framework for VLAs. At deployment, RoboMonkey samples a small set of actions from a VLA, applies Gaussian perturbation and majority voting to construct an action proposal distribution, and then uses a Vision Language Model (VLM)-based verifier to select the optimal action. We propose a synthetic data generation pipeline for training such VLM-based action verifiers, and demonstrate that scaling the synthetic dataset consistently improves verification and downstream accuracy. Through extensive simulated and hardware experiments, we show that pairing existing VLAs with RoboMonkey yields significant performance gains, achieving a 25% absolute improvement on out-of-distribution tasks and 8% on in-distribution tasks. Additionally, when adapting to new robot setups, we show that fine-tuning both VLAs and action verifiers yields a 7% performance increase compared to fine-tuning VLAs alone.
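The claimed inference-time scaling law can be made concrete with a toy experiment: under random sampling, the best-of-k action error decays smoothly as the sample budget k grows. The snippet below only fits a log-log slope on synthetic data to show the qualitative decay; the paper's exponentiated power-law form and its fitted constants are not reproduced here.

```python
import numpy as np

# 2000 trials, each with 64 sampled scalar "actions" around a target of 0.
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=(2000, 64))
errors = np.abs(samples)  # per-sample action error

# Best-of-k error: minimum error among the first k samples, averaged over trials.
ks = np.array([1, 2, 4, 8, 16, 32, 64])
best_of_k = np.array([errors[:, :k].min(axis=1).mean() for k in ks])

# A negative log-log slope indicates the error keeps shrinking as k grows.
slope, intercept = np.polyfit(np.log(ks), np.log(best_of_k), 1)
```

This is the quantity a practitioner would plot to decide how many proposals to sample before verification becomes the bottleneck.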
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness of Vision-Language-Action models in real-world environments
Scaling test-time sampling and verification for improved model generalization
Optimizing action selection using synthetic data and VLM-based verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-time scaling with sampling and verification
Gaussian perturbation and majority voting
Synthetic data-trained VLM-based verifier
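The synthetic-data idea behind the verifier can be sketched as follows: perturb ground-truth actions to produce ranked candidates, so a verifier can be trained to score actions closer to the ground truth more highly. The function name, the perturbation scale, and the negative-distance scoring rule are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_verifier_examples(ground_truth_actions, n_neg=4, sigma=0.3):
    """For each ground-truth action, emit the positive plus several
    Gaussian-perturbed negatives, with target scores given by the
    negative distance to the ground truth (higher = better)."""
    examples = []
    for gt in ground_truth_actions:
        candidates = np.stack(
            [gt] + [gt + rng.normal(0, sigma, size=gt.shape) for _ in range(n_neg)]
        )
        scores = -np.linalg.norm(candidates - gt, axis=1)
        examples.append((candidates, scores))
    return examples

# Toy usage: two 7-DoF ground-truth actions yield two ranked candidate sets.
examples = make_verifier_examples([np.zeros(7), np.ones(7)])
```

Pairs like these can then be rendered into prompts for a VLM so that scaling the synthetic dataset directly scales verifier quality, as the abstract reports.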