Unified Vision-Language-Action Model

📅 2025-06-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models over-rely on the static visual-language understanding of vision-language models (VLMs), neglecting the temporal structure and causal dynamics in visual observations. To address this, we propose a unified multimodal autoregressive architecture that jointly discretizes vision, language, and action into a single token sequence, enabling end-to-end modeling of cross-modal dependencies, and integrates a video-based world model to explicitly capture causal spatiotemporal dynamics, thereby enhancing long-horizon policy generalization and transfer. Our method combines discrete multimodal tokenization, autoregressive sequence generation, and video-driven world-model post-training. Evaluated on the CALVIN, LIBERO, and SimplerEnv-Bridge benchmarks, it achieves new state-of-the-art performance, raising the average success rate on LIBERO to 95.5%. We further validate its robustness and practicality on real-world robotic manipulation and autonomous driving tasks.
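The "single token sequence" idea can be made concrete with a minimal sketch. The vocabulary sizes, offsets, and action range below are illustrative assumptions, not values from the paper: each modality is discretized (e.g. a VQ tokenizer for images, BPE for text, uniform binning for continuous actions), shifted into a disjoint slice of one shared vocabulary, and concatenated for a standard next-token transformer.

```python
# Hypothetical sketch of a unified discrete token sequence for a VLA model.
# All vocabulary sizes and ranges are illustrative assumptions.

VISION_VOCAB = 8192   # e.g. codes from a VQ image tokenizer (assumed)
TEXT_VOCAB = 32000    # e.g. a BPE text vocabulary (assumed)
ACTION_BINS = 256     # per-dimension uniform binning of continuous actions

# Offsets place each modality in its own slice of one shared vocabulary.
TEXT_OFFSET = VISION_VOCAB
ACTION_OFFSET = VISION_VOCAB + TEXT_VOCAB

def tokenize_action(action, low=-1.0, high=1.0, bins=ACTION_BINS):
    """Discretize each continuous action dimension into a bin index."""
    tokens = []
    for a in action:
        a = min(max(a, low), high)          # clip to the assumed action range
        idx = int((a - low) / (high - low) * (bins - 1))
        tokens.append(ACTION_OFFSET + idx)
    return tokens

def build_sequence(vision_tokens, text_tokens, action):
    """Concatenate modalities into one token stream for next-token training."""
    vis = list(vision_tokens)                     # already in [0, VISION_VOCAB)
    txt = [TEXT_OFFSET + t for t in text_tokens]  # shift into the text slice
    act = tokenize_action(action)                 # shift into the action slice
    return vis + txt + act

seq = build_sequence(vision_tokens=[5, 17, 901],
                     text_tokens=[12, 7],
                     action=[0.5, -1.0, 1.0])
print(seq)  # [5, 17, 901, 8204, 8199, 40383, 40192, 40447]
```

Because all three modalities share one vocabulary, a single autoregressive loss over `seq` models cross-modal dependencies end to end, which is the property the unified formulation relies on.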

📝 Abstract
Vision-language-action models (VLAs) have garnered significant attention for their potential in advancing robotic manipulation. However, previous approaches predominantly rely on the general comprehension capabilities of vision-language models (VLMs) to generate action signals, often overlooking the rich temporal and causal structure embedded in visual observations. In this paper, we present UniVLA, a unified and native multimodal VLA model that autoregressively models vision, language, and action signals as discrete token sequences. This formulation enables flexible multimodal task learning, particularly from large-scale video data. By incorporating world modeling during post-training, UniVLA captures causal dynamics from videos, facilitating effective transfer to downstream policy learning, especially for long-horizon tasks. Our approach sets new state-of-the-art results across several widely used simulation benchmarks, including CALVIN, LIBERO, and SimplerEnv-Bridge, significantly surpassing previous methods. For example, UniVLA achieves a 95.5% average success rate on the LIBERO benchmark, surpassing pi0-FAST's 85.5%. We further demonstrate its broad applicability on real-world ALOHA manipulation and autonomous driving.
Problem

Research questions and friction points this paper is trying to address.

Overcoming limitations in vision-language-action models for robotics
Modeling temporal and causal structures in visual observations
Enhancing policy learning for long-horizon robotic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified multimodal model for vision-language-action tasks
Autoregressive modeling of discrete token sequences
World modeling for capturing causal dynamics
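At inference time, the autoregressive formulation above implies that actions are produced token by token and then mapped back to continuous commands. The following is a hedged sketch of that decode step under assumed details: `next_token` stands in for a real model's sampling step, and the bin count and action range are illustrative, not taken from the paper.

```python
# Hedged sketch of autoregressive action decoding for a discrete-token policy.
# `next_token` is a stand-in for a trained model's sampling step; the binning
# parameters below are illustrative assumptions.
ACTION_BINS = 256
LOW, HIGH = -1.0, 1.0

def detokenize(idx, low=LOW, high=HIGH, bins=ACTION_BINS):
    """Map a bin index back to the bin-center continuous value."""
    return low + (idx + 0.5) * (high - low) / bins

def decode_action(next_token, action_dim=3):
    """Greedy autoregressive rollout of one action chunk."""
    tokens, context = [], []
    for _ in range(action_dim):
        t = next_token(context)   # stand-in for model.sample(context)
        tokens.append(t)
        context.append(t)         # feed the emitted token back in
    return [detokenize(t) for t in tokens]

# Toy stand-in model: always emits the middle bin (roughly a zero action).
action = decode_action(lambda ctx: ACTION_BINS // 2)
print(action)  # [0.00390625, 0.00390625, 0.00390625]
```

Feeding each emitted token back into the context is what makes the rollout autoregressive; the same loop would also generate future video-frame tokens under the world-modeling objective, with only the target token range changing.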