Pure Vision Language Action (VLA) Models: A Comprehensive Survey

📅 2025-09-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models lack a unified conceptual framework, hindering systematic understanding of how VLA models enable robots to transition from passive perception to active decision-making and physical interaction. Method: We propose a multidimensional taxonomy spanning autoregressive, diffusion-based, reinforcement learning, and hybrid paradigms, and conduct an integrative analysis of 300+ state-of-the-art studies to characterize the modeling motivations, technical pathways, and applicability boundaries of each paradigm. Contribution/Results: This work delivers the first comprehensive VLA survey, encompassing benchmark tasks, simulation platforms, and open-source datasets. It identifies core challenges, including scalability, embodied generalization, and world modeling, and charts an evolutionary trajectory toward general-purpose embodied agents. By synthesizing theoretical foundations and empirical advances, our survey provides a structured knowledge graph and methodological foundation for both VLA research and embodied intelligence development.

📝 Abstract
The emergence of Vision Language Action (VLA) models marks a paradigm shift from traditional policy-based control to generalized robotics, reframing Vision Language Models (VLMs) from passive sequence generators into active agents for manipulation and decision-making in complex, dynamic environments. This survey delves into advanced VLA methods, aiming to provide a clear taxonomy and a systematic, comprehensive review of existing research. It presents a comprehensive analysis of VLA applications across different scenarios and classifies VLA approaches into several paradigms: autoregression-based, diffusion-based, reinforcement-based, hybrid, and specialized methods, examining their motivations, core strategies, and implementations in detail. In addition, foundational datasets, benchmarks, and simulation platforms are introduced. Building on the current VLA landscape, the review further proposes perspectives on key challenges and future directions to advance research in VLA models and generalizable robotics. By synthesizing insights from over three hundred recent studies, this survey maps the contours of this rapidly evolving field and highlights the opportunities and challenges that will shape the development of scalable, general-purpose VLA methods.
Problem

Research questions and friction points this paper is trying to address.

Surveying Vision Language Action models for robotics control in dynamic environments
Classifying VLA approaches into different paradigms and analyzing their implementations
Identifying key challenges and future directions for generalizable robotics research
Innovation

Methods, ideas, or system contributions that make the work stand out.

VLA models transform passive VLMs into active agents
Classifies VLA approaches into autoregression-based, diffusion-based, reinforcement-based, hybrid, and specialized methods
Analyzes applications, datasets, and future challenges for generalizable robotics
Authors

Dapeng Zhang, School of Information Science and Engineering, Lanzhou University, China
Jin Sun, Assistant Professor, University of Georgia
Chenghui Hu, School of Information Science and Engineering, Lanzhou University, China
Xiaoyan Wu, School of Information Science and Engineering, Lanzhou University, China
Zhenlong Yuan, Institute of Computing Technology, Chinese Academy of Sciences, China
Rui Zhou, School of Information Science and Engineering, Lanzhou University, China
Fei Shen, National University of Singapore
Qingguo Zhou, Lanzhou University