AI Summary
This work addresses the limited generalization of existing vision-language-action models across diverse robot morphologies and data-scarce scenarios. The authors propose a human-centric cross-embodiment learning paradigm that treats human interaction actions as a universal "lingua franca," enabling seamless transfer from human demonstrations to robot execution. By mapping heterogeneous robot controls into a unified action space grounded in human motion, the approach integrates sequence modeling with multi-task pretraining. Key innovations include a Mixture-of-Flow architecture, a manifold-preserving gating mechanism, and a universal asynchronous chunking strategy. Pretrained on the UniHand-2.0 multimodal dataset, the model achieves state-of-the-art performance on the LIBERO (98.9%) and RoboCasa (53.9%) simulation benchmarks and demonstrates strong cross-embodiment generalization across five real-world robotic platforms.
Abstract
We introduce Being-H0.5, a foundational Vision-Language-Action (VLA) model designed for robust cross-embodiment generalization across diverse robotic platforms. While existing VLAs often struggle with morphological heterogeneity and data scarcity, we propose a human-centric learning paradigm that treats human interaction traces as a universal "mother tongue" for physical interaction. To support this, we present UniHand-2.0, the largest embodied pre-training recipe to date, comprising over 35,000 hours of multimodal data across 30 distinct robotic embodiments. Our approach introduces a Unified Action Space that maps heterogeneous robot controls into semantically aligned slots, enabling low-resource robots to bootstrap skills from human data and high-resource platforms. Built upon this human-centric foundation, we design a unified sequential modeling and multi-task pre-training paradigm to bridge human demonstrations and robotic execution. Architecturally, Being-H0.5 adopts a Mixture-of-Transformers design featuring a novel Mixture-of-Flow (MoF) framework that decouples shared motor primitives from specialized embodiment-specific experts. Finally, to make cross-embodiment policies stable in the real world, we introduce Manifold-Preserving Gating for robustness under sensory shift and Universal Async Chunking to unify chunked control across embodiments with differing latency and control profiles. We empirically demonstrate that Being-H0.5 achieves state-of-the-art results on simulated benchmarks such as LIBERO (98.9%) and RoboCasa (53.9%), while also exhibiting strong cross-embodiment capabilities on five real-world robotic platforms.
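To make the Unified Action Space idea concrete, here is a minimal Python sketch of mapping heterogeneous native controls into shared, semantically aligned slots with a validity mask. The slot names, embodiment maps, and scatter/gather helpers below are illustrative assumptions, not the paper's actual implementation.

```python
# Conceptual sketch of a Unified Action Space (illustrative, not the authors' code).
# Each embodiment declares which semantic slots its native control dimensions fill;
# actions are scattered into a fixed-width unified vector plus a validity mask, so
# one policy head can be trained across heterogeneous embodiments.
import numpy as np

# Shared semantic slots, grounded in human motion (hypothetical layout).
UNIFIED_SLOTS = [
    "wrist_pos_x", "wrist_pos_y", "wrist_pos_z",
    "wrist_rot_roll", "wrist_rot_pitch", "wrist_rot_yaw",
    "grip_aperture",
]
SLOT_INDEX = {name: i for i, name in enumerate(UNIFIED_SLOTS)}

# Per-embodiment mapping: native action dimension -> semantic slot name.
EMBODIMENT_MAPS = {
    # An arm controlled by end-effector deltas plus a parallel gripper.
    "arm_eef_7d": ["wrist_pos_x", "wrist_pos_y", "wrist_pos_z",
                   "wrist_rot_roll", "wrist_rot_pitch", "wrist_rot_yaw",
                   "grip_aperture"],
    # A simpler 4-DoF platform without wrist-orientation control.
    "arm_4dof": ["wrist_pos_x", "wrist_pos_y", "wrist_pos_z",
                 "grip_aperture"],
}

def to_unified(embodiment: str, native_action: np.ndarray):
    """Scatter a native action vector into the unified space, with a slot mask."""
    slots = EMBODIMENT_MAPS[embodiment]
    unified = np.zeros(len(UNIFIED_SLOTS))
    mask = np.zeros(len(UNIFIED_SLOTS), dtype=bool)
    for dim, slot in enumerate(slots):
        unified[SLOT_INDEX[slot]] = native_action[dim]
        mask[SLOT_INDEX[slot]] = True
    return unified, mask

def from_unified(embodiment: str, unified: np.ndarray) -> np.ndarray:
    """Gather the slots an embodiment actually actuates back into native order."""
    return np.array([unified[SLOT_INDEX[s]] for s in EMBODIMENT_MAPS[embodiment]])

# A 4-DoF action lands in the same slots a 7-DoF arm would use.
u, m = to_unified("arm_4dof", np.array([0.1, -0.2, 0.05, 0.8]))
print(u, m, from_unified("arm_4dof", u))
```

Because a 4-DoF platform writes into the same slots a 7-DoF arm (or a retargeted human hand) would, a single policy head can consume all of them; this is one plausible mechanism by which low-resource robots could bootstrap skills from human data and high-resource platforms.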