🤖 AI Summary
This work investigates how to learn latent action representations from large-scale unlabeled human motion videos, without any action labels, and leverage them for vision-to-action generalization in robotic control. To this end, we establish a unified evaluation framework that integrates over one million video clips together with large-scale image pairs and robot trajectories, enabling the first systematic assessment of general-purpose vision foundation models on physical control tasks under zero action supervision. Our experiments demonstrate that such general-purpose models significantly outperform specialized embodied latent action models. Crucially, we find that semantically abstracted latent action spaces align more closely with the true distribution of physical actions than pixel-level representations do, thereby enabling more effective cross-task and cross-domain vision-to-action mapping.
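To make "zero action supervision" concrete: the evaluation can be read as a frozen-feature probing protocol, where features come from a pretrained vision model that never sees action labels, and only a lightweight probe is fit to robot actions. The sketch below is a minimal, hypothetical version of such a probe; the placeholder arrays, dimensions, and the choice of a Ridge probe are illustrative assumptions, not the benchmark's actual pipeline.

```python
# Minimal sketch of a frozen-feature action probe (hypothetical setup).
# Assumes features from a frozen vision model were already extracted;
# the random arrays below are stand-ins, not the benchmark's real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, feat_dim, action_dim = 2048, 768, 7  # e.g. 7-DoF end-effector actions
features = rng.normal(size=(n_samples, feat_dim))   # frozen visual features (placeholder)
actions = rng.normal(size=(n_samples, action_dim))  # ground-truth robot actions (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(
    features, actions, test_size=0.2, random_state=0
)

# The vision model itself receives no action supervision; only this
# lightweight linear probe ever sees action labels.
probe = Ridge(alpha=1.0).fit(X_tr, y_tr)
mse = np.mean((probe.predict(X_te) - y_te) ** 2)
print(f"action-prediction MSE of frozen features: {mse:.4f}")
```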
📝 Abstract
While the shortage of explicit action data limits Vision-Language-Action (VLA) models, human action videos offer a scalable yet unlabeled data source. A critical challenge in exploiting large-scale human video datasets lies in transforming visual signals into ontology-independent representations known as latent actions. However, the capacity of latent action representations to derive robust control from visual observations has yet to be rigorously evaluated. We introduce the Latent Action Representation Yielding (LARY) Benchmark, a unified framework for evaluating latent action representations on both high-level semantic actions (what to do) and low-level robotic control (how to do it). The comprehensively curated dataset encompasses over one million videos (1,000 hours) spanning 151 action categories, alongside 620K image pairs and 595K motion trajectories across diverse embodiments and environments. Our experiments reveal two crucial insights: (i) general visual foundation models, trained without any action supervision, consistently outperform specialized embodied latent action models; (ii) the latent visual space is fundamentally better aligned with the physical action space than the pixel-based space. These results suggest that general visual representations inherently encode action-relevant knowledge for physical control, and that semantic-level abstraction offers a fundamentally more effective pathway from vision to action than pixel-level reconstruction.
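One way to make insight (ii) measurable is to quantify how well each representation space aligns with the action space, for instance with linear centered kernel alignment (CKA). The snippet below is a sketch under that assumption: the abstract does not state which alignment metric is used, and the arrays here are toy stand-ins for latent features, raw pixel features, and actions.

```python
# Sketch: compare latent vs. pixel representations by their linear CKA
# with the action space (Kornblith et al., 2019). Arrays are placeholders,
# and CKA is an illustrative metric choice, not necessarily the benchmark's.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two sets of paired representations (rows = samples)."""
    x = x - x.mean(axis=0)  # center each feature dimension
    y = y - y.mean(axis=0)
    num = np.linalg.norm(x.T @ y, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return float(num / den)

rng = np.random.default_rng(0)
n = 1024
actions = rng.normal(size=(n, 7))  # placeholder robot actions
# Toy latent features carrying an action-correlated signal plus noise:
latent_feats = actions @ rng.normal(size=(7, 256)) + 0.5 * rng.normal(size=(n, 256))
# Toy pixel-level features with no action structure:
pixel_feats = rng.normal(size=(n, 4096))

print("latent-action CKA:", linear_cka(latent_feats, actions))  # near 1
print("pixel-action  CKA:", linear_cka(pixel_feats, actions))   # near 0
```

In this toy construction, a space built to share structure with the actions scores high CKA while unstructured pixel features score near zero, which mirrors the kind of alignment gap the abstract reports.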