Diffusion-VLA: Scaling Robot Foundation Models via Unified Diffusion and Autoregression

📅 2024-12-04
📈 Citations: 13
Influential: 1
🤖 AI Summary
To address the weak generalization and poor robustness of robotic visuomotor policies in complex tasks, this paper proposes a collaborative diffusion-autoregressive framework: an autoregressive language model performs cross-modal reasoning and instruction following, while a conditional diffusion model generates robust actions. A novel reasoning injection module embeds reasoning phrases directly into the policy, improving interpretability and zero-shot generalization. The method combines multimodal prompt encoding, a Transformer backbone, and a denoising U-Net in joint training, enabling data-efficient learning (under 50 demonstrations for a complex task) and rapid adaptation across robot morphologies. Experiments demonstrate real-time control at 82 Hz (2B model on a single A6000 GPU), 63.7% zero-shot accuracy on bin-picking across 102 unseen objects, and strong robustness to distractors, unseen backgrounds, and heterogeneous robot arms.
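The summary above describes the overall control flow: an autoregressive backbone reasons over the prompt and observations, and a pooled reasoning embedding conditions a diffusion head that iteratively denoises an action chunk. Below is a minimal, runnable PyTorch sketch of that flow. The GRU stand-in for the vision-language backbone, the MLP stand-in for the denoising U-Net, the Euler-style update, and all sizes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class DiffusionVLASketch(nn.Module):
    """Hedged sketch: an autoregressive backbone yields a reasoning embedding,
    which conditions a diffusion head that denoises an action chunk."""

    def __init__(self, obs_dim=64, act_dim=7, hidden=128, horizon=8):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        # Stand-in for the autoregressive VLM reasoning over prompt + observation.
        self.backbone = nn.GRU(obs_dim, hidden, batch_first=True)
        # Reasoning injection: project reasoning states into a conditioning vector.
        self.reason_proj = nn.Linear(hidden, hidden)
        # Stand-in for the conditional denoising U-Net over action chunks.
        self.denoiser = nn.Sequential(
            nn.Linear(act_dim + hidden + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    @torch.no_grad()
    def act(self, obs_tokens, steps=10):
        # 1) "Autoregressive" pass produces reasoning states; pool the last one.
        states, _ = self.backbone(obs_tokens)            # (B, T, hidden)
        cond = self.reason_proj(states[:, -1])           # (B, hidden)
        # 2) Denoise a noisy action chunk, conditioned on the reasoning vector.
        a = torch.randn(obs_tokens.size(0), self.horizon, self.act_dim)
        for k in reversed(range(1, steps + 1)):
            t = torch.full((a.size(0), self.horizon, 1), k / steps)
            c = cond.unsqueeze(1).expand(-1, self.horizon, -1)
            eps = self.denoiser(torch.cat([a, c, t], dim=-1))  # predicted noise
            a = a - eps / steps                          # crude Euler-style update
        return a                                         # (B, horizon, act_dim)

policy = DiffusionVLASketch()
actions = policy.act(torch.randn(1, 12, 64))  # 12 obs tokens -> chunk of 8 actions
print(actions.shape)                          # torch.Size([1, 8, 7])
```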

📝 Abstract
In this paper, we present DiffusionVLA, a novel framework that seamlessly combines an autoregressive model with a diffusion model for learning visuomotor policies. Central to our approach is a next-token prediction objective, enabling the model to reason effectively over the user's query in the context of current observations. Subsequently, a diffusion model is attached to generate robust action outputs. To enhance policy learning through self-reasoning, we introduce a novel reasoning injection module that integrates reasoning phrases directly into the policy learning process. The whole framework is simple and flexible, making it easy to deploy and upgrade. We conduct extensive experiments using multiple real robots to validate the effectiveness of DiffusionVLA. Our tests include a challenging factory sorting task, where DiffusionVLA successfully categorizes objects, including those not seen during training. We observe that the reasoning module makes the model interpretable: it allows observers to understand the model's thought process and identify potential causes of policy failures. Additionally, we test DiffusionVLA on a zero-shot bin-picking task, achieving 63.7% accuracy on 102 previously unseen objects. Our method demonstrates robustness to visual changes, such as distractors and new backgrounds, and easily adapts to new embodiments. Furthermore, DiffusionVLA can follow novel instructions and retains conversational ability. Notably, DiffusionVLA is data-efficient and fast at inference; our smallest DiffusionVLA-2B runs at 82 Hz on a single A6000 GPU and can be trained from scratch on fewer than 50 demonstrations for a complex task. Finally, we scale the model from 2B to 72B parameters, showcasing improved generalization capabilities with increased model size.
Problem

Research questions and friction points this paper is trying to address.

How can autoregressive reasoning and diffusion-based action generation be combined in a single visuomotor policy?
How can self-reasoning be injected into policy learning to make policies more interpretable and generalizable?
Does scaling robot foundation models improve robustness and adaptability across tasks and embodiments?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines an autoregressive next-token-prediction backbone with a conditional diffusion action head
Introduces a reasoning injection module that feeds reasoning phrases into policy learning (see the sketch after this list)
Scales the model from 2B to 72B parameters, with generalization improving at larger sizes
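To make the reasoning injection idea concrete, here is a minimal, hypothetical PyTorch sketch. The excerpt above does not specify the module's internals; FiLM-style feature modulation, the module name, and all dimensions are assumptions chosen for illustration only.

```python
import torch
import torch.nn as nn

class ReasoningInjection(nn.Module):
    """Hypothetical reasoning injection: the reasoning phrase embedding
    modulates intermediate policy features, FiLM-style. The FiLM choice
    and the sizes below are assumptions, not the paper's exact design."""

    def __init__(self, reason_dim=128, feat_dim=256):
        super().__init__()
        self.to_scale = nn.Linear(reason_dim, feat_dim)  # per-channel scale
        self.to_shift = nn.Linear(reason_dim, feat_dim)  # per-channel shift

    def forward(self, features, reasoning_emb):
        # features: (B, T, feat_dim); reasoning_emb: (B, reason_dim)
        gamma = self.to_scale(reasoning_emb).unsqueeze(1)  # (B, 1, feat_dim)
        beta = self.to_shift(reasoning_emb).unsqueeze(1)   # (B, 1, feat_dim)
        return features * (1 + gamma) + beta               # modulated features

inject = ReasoningInjection()
out = inject(torch.randn(2, 8, 256), torch.randn(2, 128))
print(out.shape)  # torch.Size([2, 8, 256])
```

The appeal of this kind of conditioning, if it matches the paper's intent, is that reasoning influences every policy layer it is injected into rather than being a one-off text prompt, which is consistent with the reported interpretability benefits.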
👥 Authors
Junjie Wen (East China Normal University, Midea Group)
Minjie Zhu (East China Normal University)
Yichen Zhu (Midea Group)
Zhibin Tang (Midea Group)
Jinming Li (Shanghai University)
Zhongyi Zhou (East China Normal University, Midea Group)
Chengmeng Li (Midea Group, Shanghai University)
Xiaoyu Liu (Midea Group, Shanghai University)
Yaxin Peng (Shanghai University)
Chaomin Shen (Dept. of Computer Science, East China Normal University)
Feifei Feng (Midea Group)