🤖 AI Summary
This work addresses three key challenges in robot policy evaluation: poor generalization, insufficient safety verification, and low-fidelity simulation. We propose a generative evaluation framework built on the Veo video foundation model. Methodologically, it integrates action-conditioned modeling, multi-view consistent completion, and synthetic scene perturbations to jointly assess bimanual manipulation policies under nominal conditions, out-of-distribution (OOD) conditions, and safety constraints. To our knowledge, this is the first systematic extension of large video models to the full spectrum of robot policy evaluation, enabling physics- and semantics-aware red-teaming for safety violations and high-fidelity interactive scene editing. Validated in 1,600+ real-world experiments, the framework characterizes performance rankings, OOD generalization bottlenecks, and safety violation patterns for eight Gemini Robotics policies across five tasks, significantly improving the comprehensiveness and interpretability of policy evaluation.
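For concreteness, here is a minimal sketch of what closed-loop, action-conditioned rollout evaluation in a generative world model could look like. All names here (`WorldModel`, `Policy`, `SuccessJudge`, `rollout`) are illustrative placeholders under our own assumptions, not the paper's actual interfaces.

```python
# Minimal sketch of action-conditioned, closed-loop policy evaluation in a
# generative world model. Every class here is an illustrative stub, not the
# system's real API: a real WorldModel would be a video model that predicts
# the next multi-view frames conditioned on the commanded robot action.
import numpy as np

class WorldModel:
    """Stand-in for an action-conditioned video model."""
    def step(self, obs: np.ndarray, action: np.ndarray) -> np.ndarray:
        return obs  # placeholder: a real model generates the next frames

class Policy:
    """Stand-in for a visuomotor policy mapping observations to actions."""
    def act(self, obs: np.ndarray) -> np.ndarray:
        return np.zeros(14)  # e.g. bimanual joint targets (assumed dim)

class SuccessJudge:
    """Stand-in for an automated success/safety check (e.g. a VLM judge)."""
    def is_success(self, frames: list) -> bool:
        return False

def rollout(policy, world_model, judge, init_obs, horizon=200):
    """Closed-loop rollout: the policy acts on *generated* observations,
    so it can be evaluated without touching the real robot."""
    frames, obs = [init_obs], init_obs
    for _ in range(horizon):
        action = policy.act(obs)
        obs = world_model.step(obs, action)  # action-conditioned prediction
        frames.append(obs)
        if judge.is_success(frames):
            return frames, True
    return frames, False

# Usage: evaluate one policy on a dummy two-camera observation.
frames, ok = rollout(Policy(), WorldModel(), SuccessJudge(),
                     np.zeros((2, 64, 64, 3)))
```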
📝 Abstract
Generative world models hold significant potential for simulating interactions with visuomotor policies in varied environments. Frontier video models enable scalable, general-purpose generation of realistic observations and environment interactions. However, the use of video models in robotics has so far been limited primarily to in-distribution evaluations, i.e., scenarios similar to those used to train the policy or fine-tune the base video model. In this report, we demonstrate that video models can serve the entire spectrum of policy evaluation use cases in robotics: assessing nominal performance, probing out-of-distribution (OOD) generalization, and testing physical and semantic safety. We introduce a generative evaluation system built upon a frontier video foundation model (Veo). The system is optimized to support robot action conditioning and multi-view consistency, while integrating generative image editing and multi-view completion to synthesize realistic variations of real-world scenes along multiple axes of generalization. We demonstrate that the system preserves the base capabilities of the video model, enabling accurate simulation of scenes that have been edited to include novel interaction objects, novel visual backgrounds, and novel distractor objects. This fidelity makes it possible to accurately predict the relative performance of different policies in both nominal and OOD conditions, to determine the relative impact of different axes of generalization on policy performance, and to perform red-teaming of policies that exposes behaviors violating physical or semantic safety constraints. We validate these capabilities through 1,600+ real-world evaluations spanning eight Gemini Robotics policy checkpoints and five tasks on a bimanual manipulator.
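The OOD evaluation and red-teaming workflow described above could be organized as a sweep over generative scene edits, as in the sketch below. The axis prompts and the `edit_scene` / `complete_views` helpers are assumptions for illustration only, and `rollout` refers to the closed-loop evaluator sketched earlier; none of these are the paper's actual interfaces.

```python
# Hypothetical sketch of an OOD sweep: perturb the nominal scene along
# several generalization axes via generative image editing, roll out each
# policy in the edited scene, and compare success rates per axis.
# rollout() is the closed-loop evaluator from the previous sketch.
from collections import defaultdict
import numpy as np

# Assumed generalization axes and edit prompts (illustrative only).
AXES = {
    "novel_object":     "replace the target mug with a glass jar",
    "novel_background": "change the tabletop to a patterned cloth",
    "distractors":      "add cluttered kitchen utensils around the target",
}

def edit_scene(obs: np.ndarray, prompt: str) -> np.ndarray:
    """Placeholder for generative image editing of the initial frame."""
    return obs

def complete_views(obs: np.ndarray) -> np.ndarray:
    """Placeholder for multi-view consistent completion of the edit."""
    return obs

def ood_sweep(policies, world_model, judge, nominal_obs, n_rollouts=20):
    rates = defaultdict(dict)
    for axis, prompt in AXES.items():
        obs0 = complete_views(edit_scene(nominal_obs, prompt))
        for policy in policies:
            wins = sum(rollout(policy, world_model, judge, obs0)[1]
                       for _ in range(n_rollouts))
            rates[axis][type(policy).__name__] = wins / n_rollouts
    # A large drop vs. nominal success on one axis flags that axis as a
    # generalization bottleneck for that policy.
    return dict(rates)
```

The same loop supports red-teaming by replacing the per-axis success rate with a count of safety-violating rollouts flagged by the judge; the design choice is that scene perturbation, simulation, and scoring stay decoupled, so any one stage can be swapped out independently.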