🤖 AI Summary
To address the weak generalization and poor robustness of robotic visuomotor policies in complex tasks, this paper proposes the first diffusion-autoregressive collaborative framework: an autoregressive language model enables cross-modal reasoning and instruction following, while a conditional diffusion model generates robust actions. We introduce a novel reasoning injection module that explicitly embeds reasoning phrases into the policy, enhancing interpretability and zero-shot generalization. The method integrates multimodal prompt encoding, a Transformer backbone, and a denoising U-Net in joint training, enabling few-shot learning (<50 demonstrations) and rapid adaptation across robot morphologies. Experiments demonstrate real-time robotic control at 82 Hz, 63.7% zero-shot accuracy on industrial sorting tasks, and strong robustness against distractors, unseen backgrounds, and heterogeneous robotic arms.
📝 Abstract
In this paper, we present DiffusionVLA, a novel framework that seamlessly combines the autoregression model with the diffusion model for learning visuomotor policy. Central to our approach is a next-token prediction objective, enabling the model to reason effectively over the user's query in the context of current observations. Subsequently, a diffusion model is attached to generate robust action outputs. To enhance policy learning through self-reasoning, we introduce a novel reasoning injection module that integrates reasoning phrases directly into the policy learning process. The whole framework is simple and flexible, making it easy to deploy and upgrade. We conduct extensive experiments using multiple real robots to validate the effectiveness of DiffusionVLA. Our tests include a challenging factory sorting task, where DiffusionVLA successfully categorizes objects, including those not seen during training. We observe that the reasoning module makes the model interpretable. It allows observers to understand the model thought process and identify potential causes of policy failures. Additionally, we test DiffusionVLA on a zero-shot bin-picking task, achieving 63.7% accuracy on 102 previously unseen objects. Our method demonstrates robustness to visual changes, such as distractors and new backgrounds, and easily adapts to new embodiments. Furthermore, DiffusionVLA can follow novel instructions and retain conversational ability. Notably, DiffusionVLA is data-efficient and fast at inference; our smallest DiffusionVLA-2B runs 82Hz on a single A6000 GPU and can train from scratch on less than 50 demonstrations for a complex task. Finally, we scale the model from 2B to 72B parameters, showcasing improved generalization capabilities with increased model size.