MiVLA: Towards Generalizable Vision-Language-Action Model with Human-Robot Mutual Imitation Pre-training

📅 2025-12-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language-action (VLA) models suffer from limited cross-morphology generalization due to domain gaps between human demonstration videos and robot execution data—particularly in viewpoint, appearance, and kinematic structure. To address this, we propose a bidirectional behavioral imitation pre-training framework featuring a novel kinematics-aware left-right hand-arm coordinate alignment mechanism, enabling behavior-level mutual imitation and knowledge fusion between human demonstrations and robotic actions. Our method integrates multi-view visual encoding, cross-modal action trajectory prediction, coordinate-transformation-driven embodied action-space alignment, and contrastive imitation learning. We evaluate on three distinct robotic platforms—ARX, PiPer, and LocoMan—demonstrating a 25% improvement in simulation task generalization and a 14% gain in real-robot control performance over state-of-the-art methods including π₀, π₀.₅, and H-RDT.
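
The left-right hand-arm coordinate alignment described above is, at its core, a rigid-body transform that re-expresses a human wrist pose in a robot arm's base frame, plus a mirror flip so that a left hand can supervise a right arm (and vice versa). The sketch below illustrates that idea with homogeneous transforms; the frame names, the choice of mirror axis, and the `align_hand_to_robot` helper are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch of kinematics-based hand-to-robot coordinate alignment.
# All frame names, shapes, and transform values are illustrative assumptions.
import numpy as np

def pose_to_matrix(position, rotation):
    """Build a 4x4 homogeneous transform from a position (3,) and rotation (3,3)."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = position
    return T

def align_hand_to_robot(T_world_hand, T_world_robot_base, handedness="right"):
    """Re-express a human wrist pose in a robot arm's base frame.

    A left/right mirror flip (about the base frame's y-axis here, as an
    assumption) lets one hand's motion supervise the opposite-side arm.
    """
    # Wrist pose relative to the robot base: inv(T_world_base) @ T_world_hand.
    T_base_hand = np.linalg.inv(T_world_robot_base) @ T_world_hand
    if handedness == "left":
        # Hypothetical mirror transform mapping left-hand motion onto a right arm;
        # conjugating by a reflection keeps the result a valid rigid-body pose.
        mirror = np.diag([1.0, -1.0, 1.0, 1.0])
        T_base_hand = mirror @ T_base_hand @ mirror
    return T_base_hand

# Example: a right-hand wrist pose expressed in the robot base frame.
T_world_hand = pose_to_matrix(np.array([0.4, 0.1, 0.9]), np.eye(3))
T_world_base = pose_to_matrix(np.array([0.0, 0.0, 0.7]), np.eye(3))
print(align_hand_to_robot(T_world_hand, T_world_base)[:3, 3])
```

Once both embodiments' actions live in a shared, consistently oriented frame like this, trajectories from either source can supervise the other during pre-training.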

📝 Abstract
While leveraging abundant human videos and simulated robot data offers a scalable remedy for the scarcity of real-world robot data, the generalization capability of existing vision-language-action models (VLAs) remains limited by mismatches in camera views, visual appearance, and embodiment morphologies. To overcome this limitation, we propose MiVLA, a generalizable VLA empowered by human-robot mutual imitation pre-training, which leverages the inherent behavioral similarity between human hands and robotic arms to build a foundation of strong behavioral priors for both human actions and robotic control. Specifically, our method uses kinematic rules with left/right hand coordinate systems for bidirectional alignment between human and robot action spaces. Given human or simulated robot demonstrations, MiVLA is trained to forecast behavior trajectories for one embodiment and to imitate behaviors for another embodiment unseen in the demonstration. Based on this mutual imitation, it integrates the behavioral fidelity of real-world human data with the manipulative diversity of simulated robot data into a unified model, thereby enhancing generalization to downstream tasks. Extensive experiments on both simulation and real-world platforms with three robots (ARX, PiPer, and LocoMan) demonstrate that MiVLA achieves substantially improved generalization, outperforming state-of-the-art VLAs (e.g., $\boldsymbol{\pi}_{0}$, $\boldsymbol{\pi}_{0.5}$, and H-RDT) by 25% in simulation and 14% in real-world robot control tasks.
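
The mutual imitation objective described in the abstract can be pictured as two supervised trajectory heads (forecast the observed embodiment, imitate the aligned unseen one) plus the contrastive imitation term mentioned in the AI summary. The sketch below shows one plausible way such a pre-training loss could be composed; the model interface, tensor shapes, equal loss weighting, and InfoNCE-style contrastive term are assumptions for illustration rather than MiVLA's actual training code.

```python
# Hedged sketch of a mutual-imitation pre-training loss. The model is assumed to
# return trajectory predictions and behavior embeddings for both embodiments.
import torch
import torch.nn.functional as F

def mutual_imitation_loss(model, obs, human_traj, robot_traj, temperature=0.1):
    """obs: visual/language features; *_traj: aligned action chunks of shape (B, T, D)."""
    pred_human, pred_robot, z_human, z_robot = model(obs)

    # 1) Forecast the observed embodiment and imitate the unseen one; both terms
    #    are supervised because the action spaces were aligned beforehand.
    forecast = F.mse_loss(pred_human, human_traj) + F.mse_loss(pred_robot, robot_traj)

    # 2) Contrastive imitation: human/robot behavior embeddings from the same
    #    demonstration should score higher than mismatched pairs in the batch.
    z_h = F.normalize(z_human, dim=-1)
    z_r = F.normalize(z_robot, dim=-1)
    logits = z_h @ z_r.t() / temperature
    targets = torch.arange(z_h.size(0), device=z_h.device)
    contrast = F.cross_entropy(logits, targets)

    return forecast + contrast
```
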
Problem

Research questions and friction points this paper is trying to address.

Addresses generalization limitations in vision-language-action models
Overcomes mismatches in camera views, appearance, and robot morphology
Enhances robot control generalization using human-robot mutual imitation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mutual imitation pre-training aligns human and robot action spaces
Kinematic rules with hand coordinate systems enable bidirectional alignment
Forecasting and imitation integrate human and simulated robot data
Authors

Zhenhan Yin (Tongji University)
Xuanhan Wang (UESTC)
Jiahao Jiang (Tongji University)
Kaiyuan Deng (University of Electronic Science and Technology of China)
Pengqi Chen (University of Electronic Science and Technology of China)
Shuangle Li (University of Electronic Science and Technology of China)
Chong Liu (University of Electronic Science and Technology of China)
Xing Xu (Tongji University)
Jingkuan Song (Tongji University)
Lianli Gao (UESTC)
Heng Tao Shen (Tongji University)