AI Summary
To address weak action representation, low training efficiency, and poor generalization of vision-language-action (VLA) models in long-horizon, complex robotic tasks across heterogeneous morphologies, this paper proposes a decoupled Diffusion Action Expert (DAE) framework synergized with a Vision-Language Model (VLM). We introduce the first billion-parameter plug-and-play DAE, directly controllable via natural language prompts, and design a cross-morphology curriculum learning paradigm comprising pretraining, alignment, and post-training stages. Our method requires no task-specific fine-tuning and achieves zero-shot execution of intricate manipulation tasks (e.g., cloth folding) on diverse robotic platforms including single-arm, dual-arm, and dexterous-hand systems. It significantly outperforms state-of-the-art methods such as Octo and OpenVLA, delivering breakthroughs in generalization across morphologies, action precision, and training efficiency.
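The diffusion action expert described above generates robot actions by iteratively denoising a noisy action chunk, conditioned on features from the VLM. The sketch below illustrates that reverse-diffusion loop in miniature; the epsilon-predictor is stubbed with a fixed linear map, and every name, dimension, and schedule here is an illustrative assumption, not DexVLA's actual architecture or API.

```python
import numpy as np

ACTION_DIM, CHUNK, STEPS = 7, 16, 10          # illustrative sizes only
betas = np.linspace(1e-4, 0.02, STEPS)        # standard DDPM-style noise schedule
alphas_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
# Stub "network": DexVLA's real expert is a ~1B-parameter diffusion model.
W = rng.normal(scale=0.01, size=(ACTION_DIM + 32, ACTION_DIM))

def predict_noise(actions, vlm_embedding):
    """Stub epsilon-predictor: concatenate the noisy action chunk with the
    VLM conditioning embedding and apply a fixed linear map."""
    cond = np.broadcast_to(vlm_embedding, (CHUNK, 32))
    return np.concatenate([actions, cond], axis=-1) @ W

def denoise_action_chunk(vlm_embedding):
    """Reverse diffusion: start from Gaussian noise and iteratively remove
    predicted noise to obtain an executable chunk of actions."""
    x = rng.standard_normal((CHUNK, ACTION_DIM))
    for t in reversed(range(STEPS)):
        eps = predict_noise(x, vlm_embedding)
        alpha = 1.0 - betas[t]
        x = (x - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alpha)
        if t > 0:  # re-inject noise on all but the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

chunk = denoise_action_chunk(rng.standard_normal(32))
print(chunk.shape)  # one (CHUNK, ACTION_DIM) action chunk
```

Conditioning the denoiser on language-grounded VLM features is what lets the same expert be steered by natural-language prompts across embodiments.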
Abstract
Enabling robots to perform diverse tasks across varied environments is a central challenge in robot learning. While vision-language-action (VLA) models have shown promise for generalizable robot skills, realizing their full potential requires addressing limitations in action representation and efficient training. Current VLA models often focus on scaling the vision-language model (VLM) component, while the action space representation remains a critical bottleneck. This paper introduces DexVLA, a novel framework designed to enhance the efficiency and generalization capabilities of VLAs for complex, long-horizon tasks across diverse robot embodiments. DexVLA features a novel diffusion-based action expert, scaled to one billion parameters, designed for cross-embodiment learning. A novel embodiment curriculum learning strategy facilitates efficient training: (1) pre-training the diffusion expert, which is separable from the VLA, on cross-embodiment data, (2) aligning the VLA model to specific embodiments, and (3) post-training for rapid adaptation to new tasks. We conduct comprehensive experiments across multiple embodiments, including single-arm, bimanual, and dexterous hand, demonstrating DexVLA's effectiveness on challenging tasks without task-specific adaptation, its ability to learn dexterous skills on novel embodiments with limited data, and its capacity to complete complex, long-horizon tasks, such as laundry folding, using only direct language prompting. In all settings, our method demonstrates superior performance compared to state-of-the-art models like Octo, OpenVLA, and Diffusion Policy.
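The three-stage curriculum in the abstract can be pictured as a per-stage schedule of which model components are trainable and which data source is used. The sketch below is one plausible reading of that schedule; all stage, component, and dataset names are illustrative assumptions, not DexVLA's actual configuration.

```python
# Components of a VLA split into a VLM backbone, a projector between the
# VLM and the action expert, and the separable diffusion action expert.
COMPONENTS = ("vlm_backbone", "projector", "diffusion_expert")

CURRICULUM = [
    # (1) pre-train the separable diffusion expert on cross-embodiment data
    ("pretrain",   "cross_embodiment_data",  {"diffusion_expert"}),
    # (2) align the VLA to a specific target embodiment
    ("align",      "target_embodiment_data", {"projector", "diffusion_expert"}),
    # (3) post-train everything for rapid adaptation to new tasks
    ("post_train", "task_demonstrations",    set(COMPONENTS)),
]

def trainable_mask(stage_name):
    """Map a stage name to {component: receives_gradients} for that stage."""
    for name, _data, trainable in CURRICULUM:
        if name == stage_name:
            return {c: c in trainable for c in COMPONENTS}
    raise KeyError(stage_name)

for name, data, _ in CURRICULUM:
    print(name, data, trainable_mask(name))
```

Freezing the VLM backbone in the early stages is a common way to keep pre-trained vision-language representations intact while the action expert learns; the abstract itself only specifies the three stages, not the exact freeze schedule.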