RobotArena ∞: Scalable Robot Benchmarking via Real-to-Sim Translation

📅 2025-10-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world robotic policy evaluation is labor-intensive, slow, unsafe at scale, and hard to reproduce, while existing simulation benchmarks train and test policies within the same synthetic domains and therefore cannot assess models trained on real-world demonstrations. This paper introduces a scalable evaluation framework that couples real-to-sim translation with online human preference feedback. Leveraging vision-language models, 2D-to-3D generative modeling, and differentiable rendering, it automatically constructs high-fidelity digital-twin scenes from real manipulation videos. The framework supports cross-domain policy evaluation, robustness validation under diverse perturbations, and automated scoring, while human annotators provide only lightweight pairwise comparisons for preference labeling, drastically reducing annotation overhead. The result is large-scale, safe, reproducible, and standardized evaluation of real-world-trained policies: a unified benchmarking paradigm for generalist robotic foundation models.
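
One way the lightweight pairwise comparisons described above can be turned into a policy ranking is a Bradley-Terry model. The paper's exact aggregation rule is not reproduced here, so the following is a minimal sketch under that assumption, with hypothetical policy names and data format:

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=200):
    """Fit Bradley-Terry strengths from (winner, loser) policy-name pairs."""
    wins = defaultdict(int)          # total wins per policy
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    for w, l in comparisons:
        wins[w] += 1
        pair_counts[frozenset((w, l))] += 1
    names = {n for pair in comparisons for n in pair}
    strength = {n: 1.0 for n in names}
    for _ in range(iters):  # minorization-maximization updates (Hunter, 2004)
        new = {}
        for i in names:
            denom = sum(
                count / (strength[i] + strength[next(iter(pair - {i}))])
                for pair, count in pair_counts.items() if i in pair
            )
            # small smoothing keeps never-winning policies at a nonzero score
            new[i] = (wins[i] + 1e-6) / denom
        total = sum(new.values())
        strength = {n: v / total for n, v in new.items()}  # fix overall scale
    return strength

# Hypothetical crowdworker duel outcomes:
duels = [("policy_a", "policy_b"), ("policy_a", "policy_c"),
         ("policy_b", "policy_c"), ("policy_a", "policy_b")]
print(bradley_terry(duels))  # policy_a receives the highest strength
```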

📝 Abstract
The pursuit of robot generalists (instructable agents capable of performing diverse tasks across diverse environments) demands rigorous and scalable evaluation. Yet real-world testing of robot policies remains fundamentally constrained: it is labor-intensive, slow, unsafe at scale, and difficult to reproduce. Existing simulation benchmarks are similarly limited, as they train and test policies within the same synthetic domains and cannot assess models trained from real-world demonstrations or alternative simulation environments. As policies expand in scope and complexity, these barriers only intensify, since defining "success" in robotics often hinges on nuanced human judgments of execution quality. In this paper, we introduce a new benchmarking framework that overcomes these challenges by shifting vision-language-action (VLA) evaluation into large-scale simulated environments augmented with online human feedback. Leveraging advances in vision-language models, 2D-to-3D generative modeling, and differentiable rendering, our approach automatically converts video demonstrations from widely used robot datasets into simulated counterparts. Within these digital twins, we assess VLA policies using both automated VLM-guided scoring and scalable human preference judgments collected from crowdworkers, transforming human involvement from tedious scene setup, resetting, and safety supervision into lightweight preference comparisons. To measure robustness, we systematically perturb simulated environments along multiple axes, such as textures and object placements, stress-testing policy generalization under controlled variation. The result is a continuously evolving, reproducible, and scalable benchmark for real-world trained robot manipulation policies, addressing a critical missing capability in today's robotics landscape.
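
To make the perturbation axes from the abstract concrete, here is a self-contained sketch (not the authors' code) that samples scene variants by jittering object placements and swapping textures; the scene schema, jitter range, and texture names are illustrative assumptions:

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ObjectState:
    name: str
    xy: tuple      # tabletop position in metres
    texture: str

def perturb(scene, rng, pos_jitter=0.05, textures=("wood", "metal", "fabric")):
    """Return a perturbed copy of a scene (a list of ObjectState)."""
    out = []
    for obj in scene:
        x, y = obj.xy
        out.append(replace(
            obj,
            xy=(x + rng.uniform(-pos_jitter, pos_jitter),  # shift placement
                y + rng.uniform(-pos_jitter, pos_jitter)),
            texture=rng.choice(textures),                  # swap texture
        ))
    return out

rng = random.Random(0)  # fixed seed keeps evaluation variants reproducible
base = [ObjectState("mug", (0.40, 0.10), "ceramic")]
variants = [perturb(base, rng) for _ in range(5)]
```

A fixed seed is the detail that makes perturbed evaluation reproducible: every policy can be rolled out in exactly the same set of scene variants.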
Problem

Research questions and friction points this paper is trying to address.

Scaling robot benchmarking beyond physical testing limitations
Evaluating policies across real-world and simulated environments consistently
Assessing nuanced execution quality through scalable human feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-to-sim translation converts real video demonstrations into simulated digital twins
Automated VLM-guided scoring and crowdsourced human preference judgments assess policies (sketched below)
Systematic perturbations of textures and object placements stress-test policy robustness
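
As a hedged illustration of the VLM-guided scoring idea above, the stub below shows only the shape of such a scorer; query_vlm is a hypothetical placeholder rather than a real API, and the prompt wording is invented:

```python
from typing import List

def query_vlm(frames: List[bytes], prompt: str) -> float:
    # Placeholder: in practice this would call an actual vision-language
    # model and parse its textual judgment into a scalar in [0, 1].
    return 0.0

def score_rollout(frames: List[bytes], instruction: str) -> float:
    """Score one simulated rollout against its language instruction."""
    prompt = (f"The robot was asked to: '{instruction}'. "
              "Based on these frames, rate task success from 0 to 1.")
    return query_vlm(frames, prompt)
```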