🤖 AI Summary
A systematic, cross-model empirical evaluation of vision-language-action (VLA) foundation models in real-world robotic manipulation remains lacking.
Method: We introduce the first standardized benchmarking framework covering both simulation and the real-world ALOHA Mobile platform, evaluating four leading VLA models (ACT, OpenVLA-OFT, RDT-1B, and π₀) along four axes: task accuracy, out-of-distribution generalization, instruction-following fidelity, and deployment cost.
Contribution/Results: We propose a multi-dimensional trade-off analysis, uncovering fundamental tensions among generalization (where π₀ excels), stability (where ACT dominates), and computational overhead. We identify recurring failure modes (near-miss grasping, premature release, and long-horizon state drift) and characterize data-scaling behavior. Our reproducible evaluation provides empirically grounded guidance for VLA model selection and practical deployment in robotics.
📝 Abstract
Foundation models applied in robotics, particularly Vision-Language-Action (VLA) models, hold great promise for achieving general-purpose manipulation. Yet systematic real-world evaluations and cross-model comparisons remain scarce. This paper reports our empirical experiences from benchmarking four representative VLAs (ACT, OpenVLA-OFT, RDT-1B, and π₀) across four manipulation tasks conducted both in simulation and on the ALOHA Mobile platform. We establish a standardized evaluation framework that measures performance along three key dimensions: (1) accuracy and efficiency (success rate and time-to-success), (2) adaptability across in-distribution, spatial out-of-distribution, and instance-plus-spatial out-of-distribution settings, and (3) language instruction-following accuracy. Through this process, we observe that π₀ demonstrates superior adaptability in out-of-distribution scenarios, while ACT provides the highest stability in-distribution. Further analysis highlights differences in computational demands, data-scaling behavior, and recurring failure modes such as near-miss grasps, premature releases, and long-horizon state drift. These findings reveal practical trade-offs among VLA model architectures in balancing precision, generalization, and deployment cost, offering actionable insights for selecting and deploying VLAs in real-world robotic manipulation tasks.
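To make the three evaluation dimensions concrete, below is a minimal Python sketch of how per-episode results might be aggregated into a per-model, per-setting table. The `Episode` schema, field names, and `summarize` helper are illustrative assumptions for this page, not the paper's actual evaluation code.

```python
# Hypothetical aggregation of the paper's three evaluation dimensions:
# (1) accuracy/efficiency, (2) adaptability across distribution settings,
# (3) instruction-following accuracy. Schema and names are assumptions.
from dataclasses import dataclass
from statistics import mean

SETTINGS = ("in_distribution", "spatial_ood", "instance_plus_spatial_ood")

@dataclass
class Episode:
    model: str                     # e.g. "ACT", "OpenVLA-OFT", "RDT-1B", "pi0"
    setting: str                   # one of SETTINGS
    success: bool                  # task completed within the episode's time budget
    time_to_success: float | None  # seconds on success; None on failure
    followed_instruction: bool     # acted on the commanded object, not a distractor

def summarize(episodes: list[Episode]) -> dict:
    """Aggregate success rate, mean time-to-success, and
    instruction-following accuracy per (model, setting) cell."""
    table: dict = {}
    for model in sorted({e.model for e in episodes}):
        for setting in SETTINGS:
            runs = [e for e in episodes if e.model == model and e.setting == setting]
            if not runs:
                continue
            hits = [e for e in runs if e.success]
            table[(model, setting)] = {
                "success_rate": len(hits) / len(runs),
                # Time-to-success is only defined over successful episodes.
                "mean_time_to_success": mean(e.time_to_success for e in hits) if hits else None,
                "instruction_following_acc": sum(e.followed_instruction for e in runs) / len(runs),
            }
    return table
```

A call such as `summarize([Episode("ACT", "in_distribution", True, 12.3, True), ...])` would yield one row per model-setting pair, which is the shape of comparison the benchmark reports across in-distribution and the two out-of-distribution regimes.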