🤖 AI Summary
Existing vision-language-action (VLA) models face two key bottlenecks: autoregressive decoders are constrained by a fixed left-to-right token ordering, while continuous diffusion or flow-matching approaches require iterative sampling and lack seamless integration with vision-language model (VLM) backbones. This work proposes a discrete-diffusion-based VLA framework that quantizes continuous robot actions into discrete token sequences, jointly modeled by a single Transformer natively compatible with standard VLM interfaces. We introduce an adaptive decoding order and a secondary re-masking error-correction mechanism, enabling parallel decoding while preserving progressive refinement. Leveraging a discrete diffusion objective with a cross-entropy loss, our method fully reuses pre-trained VLM priors for end-to-end, parallel action generation in few decoding steps. Evaluated on LIBERO, SimplerEnv Fractal, and SimplerEnv Bridge, our approach achieves success rates of 96.3%, 71.2%, and 49.3%, respectively—substantially outperforming both autoregressive and continuous diffusion baselines.
📝 Abstract
Vision-Language-Action (VLA) models adapt large vision-language backbones to map images and instructions to robot actions. However, prevailing VLA decoders either generate actions autoregressively in a fixed left-to-right order or attach continuous diffusion or flow-matching heads outside the backbone, demanding specialized training and iterative sampling that hinder a unified, scalable architecture. We present Discrete Diffusion VLA, a single-transformer policy that models discretized action chunks with discrete diffusion and is trained with the same cross-entropy objective as the VLM backbone. The design retains diffusion's progressive refinement paradigm while remaining natively compatible with the discrete token interface of VLMs. Our method achieves an adaptive decoding order that resolves easy action elements before harder ones, and uses secondary re-masking to revisit uncertain predictions across refinement rounds, which improves consistency and enables robust error correction. This unified decoder preserves pretrained vision-language priors, supports parallel decoding, breaks the autoregressive bottleneck, and reduces the number of function evaluations. Discrete Diffusion VLA achieves 96.3% avg. SR on LIBERO, 71.2% visual matching on SimplerEnv Fractal, and 49.3% overall on SimplerEnv Bridge, improving over both autoregressive and continuous diffusion baselines. These findings indicate that a discrete diffusion action decoder supports precise action modeling and consistent training, laying the groundwork for scaling VLA to larger models and datasets.
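To make the decoding loop concrete, below is a minimal NumPy sketch of confidence-based parallel decoding with an adaptive order and secondary re-masking, in the spirit described above. All names, the commit schedule, and the `remask_frac` parameter are illustrative assumptions, not the paper's actual implementation; `logits_fn` stands in for the transformer's per-position action-token logits.

```python
import numpy as np

MASK = -1  # sentinel id for a still-masked action token (illustrative)

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def decode_actions(logits_fn, seq_len, num_rounds=4, remask_frac=0.15):
    """Sketch of parallel action-token decoding with progressive refinement.

    logits_fn: maps the current token sequence (MASK for undecided slots)
    to logits of shape (seq_len, vocab_size). Assumed interface.
    """
    tokens = np.full(seq_len, MASK, dtype=int)

    for t in range(num_rounds):
        probs = softmax(logits_fn(tokens))      # (seq_len, vocab_size)
        pred = probs.argmax(-1)                 # greedy candidate per slot
        conf = probs.max(-1)                    # model confidence per slot

        # Adaptive order: commit the most confident still-masked positions
        # first (easy elements resolved before harder ones).
        masked = tokens == MASK
        n_commit = int(np.ceil(masked.sum() * (t + 1) / num_rounds))
        order = np.argsort(-np.where(masked, conf, -np.inf))
        commit = order[:n_commit]
        tokens[commit] = pred[commit]

        # Secondary re-masking: re-open the least confident committed
        # tokens so later rounds can revise them (skip on the last round).
        if t < num_rounds - 1:
            committed = np.flatnonzero(tokens != MASK)
            n_re = int(len(committed) * remask_frac)
            if n_re:
                worst = committed[np.argsort(conf[committed])[:n_re]]
                tokens[worst] = MASK

    # Fill any slot still masked with its greedy prediction.
    probs = softmax(logits_fn(tokens))
    still = tokens == MASK
    tokens[still] = probs.argmax(-1)[still]
    return tokens
```

Because every position is predicted in parallel each round, the number of model evaluations is `num_rounds + 1` regardless of sequence length, in contrast to one evaluation per token for an autoregressive decoder.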