LLaDA-VLA: Vision Language Diffusion Action Models

πŸ“… 2025-09-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limited action-generation capability of vision-language models (VLMs) in robotic manipulation tasks. To this end, it brings diffusion-based vision-language models (d-VLMs) into robot policy learning for the first time, proposing LLaDA-VLA, the first unified vision-language-diffusion-action model built on d-VLMs. Methodologically, it introduces a localized special-token classification mechanism and a hierarchical structured decoding strategy, which explicitly model temporal dependencies among actions while reducing the complexity of adapting d-VLM latent spaces to robotic action spaces. Evaluated on simulated and real-world robotic grasping and placing tasks, LLaDA-VLA substantially outperforms existing state-of-the-art vision-language-action models. These results empirically validate the effectiveness and generalization potential of the diffusion paradigm for embodied policy learning.

πŸ“ Abstract
The rapid progress of auto-regressive vision-language models (VLMs) has inspired growing interest in vision-language-action (VLA) models for robotic manipulation. Recently, masked diffusion models, a paradigm distinct from autoregressive models, have begun to demonstrate competitive performance in text generation and multimodal applications, leading to the development of a series of diffusion-based VLMs (d-VLMs). However, leveraging such models for robot policy learning remains largely unexplored. In this work, we present LLaDA-VLA, the first Vision-Language-Diffusion-Action model built upon pretrained d-VLMs for robotic manipulation. To effectively adapt d-VLMs to the robotic domain, we introduce two key designs: (1) a localized special-token classification strategy that replaces full-vocabulary classification with classification over special action tokens, reducing adaptation difficulty; (2) a hierarchical action-structured decoding strategy that decodes action sequences hierarchically, considering the dependencies within and across actions. Extensive experiments demonstrate that LLaDA-VLA significantly outperforms state-of-the-art VLAs in both simulation and on real-world robots.
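The first key design, localized special-token classification, can be illustrated with a minimal sketch: instead of classifying over the d-VLM's full vocabulary, the output distribution is restricted to a small set of vocabulary indices reserved for discretized action bins. The function name, the token layout, and the greedy readout below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def localized_action_logits(logits, action_token_ids):
    """Restrict classification to a small set of special action tokens.

    logits: (batch, seq_len, vocab_size) scores from the model head.
    action_token_ids: 1-D array of vocabulary indices reserved for
    discretized action bins (hypothetical layout for illustration).
    """
    # Keep only the columns belonging to action tokens; the model then
    # classifies over len(action_token_ids) classes, not the full vocab.
    local = logits[..., action_token_ids]  # (batch, seq, n_action_tokens)
    # Numerically stable softmax over the restricted token set.
    e = np.exp(local - local.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    # Map the greedy choice back to original vocabulary ids.
    picked = action_token_ids[probs.argmax(axis=-1)]
    return probs, picked
```

The intuition is that a pretrained d-VLM's head spans tens of thousands of tokens, while the action space needs only a few hundred bins; narrowing the classification target is what reduces adaptation difficulty.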
Problem

Research questions and friction points this paper is trying to address.

Adapting diffusion vision-language models for robot policy learning
Reducing adaptation difficulty in robotic manipulation tasks
Hierarchically decoding action sequences with dependencies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts diffusion vision-language models for robotics
Uses localized special-token classification strategy
Implements hierarchical action-structured decoding approach
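The hierarchical action-structured decoding idea can be sketched as a two-level loop over a masked token grid: the outer loop commits actions in temporal order (across-action dependency), and the inner loop unmasks the highest-confidence token within the current action (within-action dependency). The `predict_fn` interface, the confidence-based slot order, and the grid layout are assumptions for illustration, not the paper's exact decoder.

```python
import numpy as np

MASK = -1  # sentinel for a still-masked slot

def hierarchical_decode(predict_fn, n_actions, tokens_per_action):
    """Toy sketch of hierarchical action-structured masked decoding.

    The sequence is an (n_actions x tokens_per_action) grid of masked slots.
    predict_fn(seq) -> (token_ids, confidences), both shaped like seq;
    it stands in for the d-VLM's masked-token predictions.
    """
    seq = np.full((n_actions, tokens_per_action), MASK, dtype=int)
    for a in range(n_actions):              # across-action temporal order
        for _ in range(tokens_per_action):  # within-action refinement
            ids, conf = predict_fn(seq)
            masked = seq[a] == MASK
            # Commit the most confident still-masked slot of action a.
            slot = np.where(masked, conf[a], -np.inf).argmax()
            seq[a, slot] = ids[a, slot]
    return seq
```

Decoding action-by-action, rather than unmasking tokens anywhere in the sequence, is what lets the decoder respect temporal dependencies between consecutive actions.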