Discrete Diffusion VLA: Bringing Discrete Diffusion to Action Decoding in Vision-Language-Action Policies

📅 2025-08-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models face two key bottlenecks: autoregressive decoders are constrained by a fixed left-to-right token ordering, while continuous diffusion or flow-matching heads require specialized training and iterative sampling and integrate poorly with vision-language model (VLM) backbones. This work proposes a discrete-diffusion VLA framework that quantizes continuous robot actions into discrete token sequences, jointly modeled by a single Transformer that is natively compatible with standard VLM interfaces. It introduces an adaptive decoding order and a secondary re-masking error-correction mechanism, enabling parallel decoding while preserving diffusion's progressive-refinement paradigm. By training with a discrete diffusion objective under the same cross-entropy loss as the backbone, the method fully reuses pre-trained VLM priors for end-to-end action generation with few function evaluations. Evaluated on the LIBERO, SimplerEnv Fractal, and SimplerEnv Bridge benchmarks, the approach reaches success rates of 96.3%, 71.2%, and 49.3%, respectively, substantially outperforming both autoregressive and continuous diffusion baselines.
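The summary's first step is quantizing continuous robot actions into discrete token sequences. A minimal sketch of per-dimension uniform binning is below; the bin count, value range, and function names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def quantize_actions(actions, low=-1.0, high=1.0, n_bins=256):
    """Map continuous action values in [low, high] to integer bin ids."""
    clipped = np.clip(actions, low, high)
    # Scale to [0, n_bins] and floor to a bin index; clamp the top edge.
    ids = ((clipped - low) / (high - low) * n_bins).astype(int)
    return np.minimum(ids, n_bins - 1)

def dequantize_actions(ids, low=-1.0, high=1.0, n_bins=256):
    """Recover an approximate continuous value from the bin center."""
    return low + (ids + 0.5) * (high - low) / n_bins
```

The round trip loses at most half a bin width of precision, which is the usual trade-off when a token-based decoder replaces a continuous head.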

📝 Abstract
Vision-Language-Action (VLA) models adapt large vision-language backbones to map images and instructions to robot actions. However, prevailing VLA decoders either generate actions autoregressively in a fixed left-to-right order or attach continuous diffusion or flow matching heads outside the backbone, demanding specialized training and iterative sampling that hinder a unified, scalable architecture. We present Discrete Diffusion VLA, a single-transformer policy that models discretized action chunks with discrete diffusion and is trained with the same cross-entropy objective as the VLM backbone. The design retains diffusion's progressive refinement paradigm while remaining natively compatible with the discrete token interface of VLMs. Our method achieves an adaptive decoding order that resolves easy action elements before harder ones and uses secondary remasking to revisit uncertain predictions across refinement rounds, which improves consistency and enables robust error correction. This unified decoder preserves pretrained vision-language priors, supports parallel decoding, breaks the autoregressive bottleneck, and reduces the number of function evaluations. Discrete Diffusion VLA achieves 96.3% avg. SR on LIBERO, 71.2% visual matching on SimplerEnv Fractal and 49.3% overall on SimplerEnv Bridge, improving over both autoregressive and continuous diffusion baselines. These findings indicate that a discrete-diffusion action decoder supports precise action modeling and consistent training, laying groundwork for scaling VLA to larger models and datasets.
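The abstract's adaptive decoding order and secondary remasking can be sketched as a confidence-guided parallel decoding loop over masked action tokens. Everything here is an assumption for illustration (the `logits_fn` interface, the unmasking schedule, and the `remask_frac` heuristic), not the paper's exact procedure:

```python
import numpy as np

MASK = -1  # sentinel id for masked (not-yet-decoded) positions

def refine(logits_fn, seq_len, rounds=4, remask_frac=0.1):
    """Confidence-guided parallel decoding with secondary remasking.

    logits_fn(tokens) -> (seq_len, vocab) array of per-position probabilities.
    """
    tokens = np.full(seq_len, MASK)
    for r in range(rounds):
        probs = logits_fn(tokens)
        conf, pred = probs.max(axis=1), probs.argmax(axis=1)
        masked = tokens == MASK
        if masked.any():
            # Adaptive order: commit the easiest (highest-confidence)
            # masked positions first, on a linear schedule.
            n_unmask = max(1, int(np.ceil(masked.sum() / (rounds - r))))
            order = np.argsort(np.where(masked, -conf, np.inf))
            tokens[order[:n_unmask]] = pred[order[:n_unmask]]
        # Secondary remasking: send the least-confident committed
        # tokens back for revision in later rounds.
        if r < rounds - 1:
            committed = tokens != MASK
            n_remask = int(remask_frac * committed.sum())
            if n_remask:
                worst = np.argsort(np.where(committed, conf, np.inf))[:n_remask]
                tokens[worst] = MASK
    # Any position still masked gets its final argmax prediction.
    final = logits_fn(tokens)
    return np.where(tokens == MASK, final.argmax(axis=1), tokens)
```

Because every round predicts all positions in parallel, the number of function evaluations is bounded by `rounds` rather than by the sequence length, which is the contrast with left-to-right autoregressive decoding that the abstract draws.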
Problem

Research questions and friction points this paper is trying to address.

Modeling robot actions with discrete diffusion for vision-language-action policies
Unifying action decoding with vision-language backbones using cross-entropy training
Enabling adaptive decoding order and robust error correction in VLA models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discrete diffusion over discretized action chunks
Unified transformer with cross-entropy training objective
Adaptive decoding order with secondary remasking
Zhixuan Liang
The University of Hong Kong
Embodied AI, Machine Learning, Robotics, Computer Vision
Yizhuo Li
The University of Hong Kong
Tianshuo Yang
The University of Hong Kong, Shanghai AI Laboratory
Chengyue Wu
The University of Hong Kong
Sitong Mao
Huawei, The Hong Kong Polytechnic University
CV, Multi-modality, Embodied AI, Transfer learning
Liuao Pei
The University of Hong Kong, Shanghai AI Laboratory
Xiaokang Yang
Shanghai Jiao Tong University
Jiangmiao Pang
Shanghai AI Laboratory
Yao Mu
Shanghai Jiao Tong University, Shanghai AI Laboratory
Ping Luo
National University of Defense Technology