AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World

πŸ“… 2025-03-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the long-standing challenges of low efficiency, heavy human dependence, and poor reproducibility in real-world evaluation of general-purpose robotic manipulation policies, this paper introduces AutoEval, presented as the first fully autonomous, end-to-end system for real-world policy evaluation. AutoEval establishes a scalable, distributed scene-network paradigm that integrates vision-based success detection, closed-loop scene-reset control, cluster-style job scheduling, and standardized multi-scene interfaces. Deployed on BridgeData/WidowX hardware, it supports task-queue submission and 24/7 continuous operation with near-zero human intervention. Empirical evaluation demonstrates over 98% accuracy in outcome assessment, in strong agreement with human annotations. The project also open-sources multiple standardized benchmark scenes, establishing a reproducible, large-scale, and cost-effective real-world evaluation infrastructure for robot learning.

πŸ“ Abstract
Scalable and reproducible policy evaluation has been a long-standing challenge in robot learning. Evaluations are critical to assess progress and build better policies, but evaluation in the real world, especially at a scale that would provide statistically reliable results, is costly in terms of human time and hard to obtain. Evaluation of increasingly generalist robot policies requires an increasingly diverse repertoire of evaluation environments, making the evaluation bottleneck even more pronounced. To make real-world evaluation of robotic policies more practical, we propose AutoEval, a system to autonomously evaluate generalist robot policies around the clock with minimal human intervention. Users interact with AutoEval by submitting evaluation jobs to the AutoEval queue, much like how software jobs are submitted with a cluster scheduling system, and AutoEval will schedule the policies for evaluation within a framework supplying automatic success detection and automatic scene resets. We show that AutoEval can nearly fully eliminate human involvement in the evaluation process, permitting around the clock evaluations, and the evaluation results correspond closely to ground truth evaluations conducted by hand. To facilitate the evaluation of generalist policies in the robotics community, we provide public access to multiple AutoEval scenes in the popular BridgeData robot setup with WidowX robot arms. In the future, we hope that AutoEval scenes can be set up across institutions to form a diverse and distributed evaluation network.
Problem

Research questions and friction points this paper is trying to address.

Real-world policy evaluation at statistically reliable scale is costly in human time
Generalist policies require an increasingly diverse repertoire of evaluation environments
Hand-run evaluations are hard to reproduce across labs and over time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Autonomous, around-the-clock evaluation system for robot policies
Automatic vision-based success detection and scene resets
Queue-based job scheduling, with a proposed cross-institution evaluation network
πŸ”Ž Similar Papers
No similar papers found.