Dita: Scaling Diffusion Transformer for Generalist Vision-Language-Action Policy

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action models are constrained by compact action-head designs, limiting adaptability to heterogeneous action spaces. Method: We propose a multimodal joint diffusion framework for general-purpose robotic policies, replacing discretized or continuous action prediction heads with unified denoising generation of continuous action sequences via Transformer-based modeling. The approach introduces an in-context conditional diffusion mechanism for fine-grained alignment between denoised actions and raw visual tokens, explicitly models action deltas and subtle environmental changes, and supports cross-embodiment, multi-view, long-horizon tasks as well as scalable extension to heterogeneous action spaces. Contribution/Results: The framework achieves state-of-the-art or comparable performance on simulation benchmarks. With only 10-shot finetuning using third-person camera inputs, it demonstrates robust deployment in complex real-world scenarios. The authors open-source a lightweight, general-purpose baseline implementation.

📝 Abstract
While recent vision-language-action models trained on diverse robot datasets exhibit promising generalization capabilities with limited in-domain data, their reliance on compact action heads to predict discretized or continuous actions constrains adaptability to heterogeneous action spaces. We present Dita, a scalable framework that leverages Transformer architectures to directly denoise continuous action sequences through a unified multimodal diffusion process. Departing from prior methods that condition denoising on fused embeddings via shallow networks, Dita employs in-context conditioning -- enabling fine-grained alignment between denoised actions and raw visual tokens from historical observations. This design explicitly models action deltas and environmental nuances. By scaling the diffusion action denoiser alongside the Transformer's scalability, Dita effectively integrates cross-embodiment datasets across diverse camera perspectives, observation scenes, tasks, and action spaces. Such synergy enhances robustness against various variances and facilitates the successful execution of long-horizon tasks. Evaluations across extensive benchmarks demonstrate state-of-the-art or comparative performance in simulation. Notably, Dita achieves robust real-world adaptation to environmental variances and complex long-horizon tasks through 10-shot finetuning, using only third-person camera inputs. The architecture establishes a versatile, lightweight and open-source baseline for generalist robot policy learning. Project Page: https://robodita.github.io.
Problem

Research questions and friction points this paper is trying to address.

Adapting to heterogeneous action spaces in vision-language-action models
Enhancing alignment between denoised actions and visual observations
Scaling diffusion action denoiser for cross-embodiment dataset integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Transformer for continuous action denoising
Uses in-context conditioning for fine-grained alignment
Scales diffusion action denoiser with Transformer scalability
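The in-context conditioning idea above can be sketched in a few lines: instead of fusing observations into a single embedding that conditions a shallow action head, the noisy action tokens are appended to the raw vision-language token sequence, so a Transformer denoiser attends to them directly. This is a minimal illustrative sketch (not the authors' code); all dimensions and the linear noise schedule are assumptions for demonstration.

```python
import numpy as np

# Hedged sketch of in-context conditional diffusion for action chunks.
# Toy dimensions, chosen for illustration only.
rng = np.random.default_rng(0)
D = 8            # token width
T = 100          # diffusion steps
lang = rng.normal(size=(4, D))    # language tokens
vis = rng.normal(size=(16, D))    # raw visual tokens from history
a0 = rng.normal(size=(6, D))      # clean action-chunk tokens

# Linear beta schedule for the forward (noising) process.
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample(a0, t, eps):
    """Forward diffusion: a_t = sqrt(ab_t) * a0 + sqrt(1 - ab_t) * eps."""
    ab = alpha_bar[t]
    return np.sqrt(ab) * a0 + np.sqrt(1.0 - ab) * eps

t = 50
eps = rng.normal(size=a0.shape)
a_t = q_sample(a0, t, eps)

# In-context conditioning: one flat token sequence, no fusion bottleneck.
# A Transformer denoiser would run over `tokens` and predict the noise
# at the action positions (epsilon-prediction objective).
tokens = np.concatenate([lang, vis, a_t], axis=0)   # (4+16+6, D)
target = eps                                         # training target

assert tokens.shape == (26, D)
assert target.shape == a_t.shape
```

Scaling then amounts to growing this single Transformer over the concatenated sequence, rather than enlarging a separate conditioning network, which is what lets the denoiser absorb heterogeneous action spaces and camera views.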
Authors
Zhi Hou · The University of Sydney · Computer Vision, Machine Learning
Tianyi Zhang · College of Computer Science and Technology, Zhejiang University; Shanghai AI Lab
Yuwen Xiong · University of Toronto · Computer Vision, Deep Learning
Haonan Duan · SenseTime Research
Hengjun Pu · MMLab, The Chinese University of Hong Kong; Shanghai AI Lab
Ronglei Tong · SenseTime Research
Chengyang Zhao · Carnegie Mellon University · Robotics, Machine Learning, 3D Computer Vision
Xizhou Zhu · Tsinghua University
Yu Qiao · Shanghai AI Lab
Jifeng Dai · Associate Professor of EE, Tsinghua University · Computer Vision, Deep Learning
Yuntao Chen · Miromind · Agentic AI, Multimodal Models, Computer Vision