Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications

📅 2025-10-08
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This paper addresses the weak generalization and high deployment barrier of Vision-Language-Action (VLA) models in real-world robotic systems. Methodologically, it introduces the first full-stack, application-oriented VLA survey framework, systematically analyzing strategy evolution, multimodal architecture design, and learning paradigms (including LLM/VLM integration, data augmentation, and policy learning), as well as hardware integration and evaluation methodologies. It provides the first unified taxonomy and comparative analysis of training methods, benchmarks, modality combinations, and datasets. Key contributions include: (1) an open-source VLA literature database covering 120+ works, annotated with datasets, models, and platforms and hosted on a dedicated project website; and (2) a distilled set of critical bottlenecks and practical pathways for cross-task, cross-object, and cross-environment generalization, offering a structured roadmap and reproducible benchmarks to bridge the gap between simulation-based VLA development and real-world robotic deployment.
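To illustrate how an entry in such an annotated literature database might be structured, here is a minimal sketch in Python. The schema is an assumption for illustration only: the field names (title, training_approach, modalities, datasets, platform) mirror the annotation axes the summary mentions, not the actual columns of the project-website table.

```python
from dataclasses import dataclass, field

@dataclass
class VLAEntry:
    """One annotated literature-database entry. Hypothetical schema: field
    names are assumed for illustration, not the actual project-website columns."""
    title: str
    training_approach: str                               # e.g. "pretraining + fine-tuning"
    modalities: list[str] = field(default_factory=list)  # e.g. ["rgb", "language", "action"]
    datasets: list[str] = field(default_factory=list)    # public datasets used for training
    platform: str = ""                                   # robot hardware used for evaluation

# Filtering by modality combination, one of the taxonomy axes described above:
entries = [
    VLAEntry("ExampleVLA-1", "pretraining + fine-tuning", ["rgb", "language"], ["DatasetA"], "arm"),
    VLAEntry("ExampleVLA-2", "from scratch", ["rgb", "depth", "language"], ["DatasetB"], "mobile base"),
]
with_depth = [e.title for e in entries if "depth" in e.modalities]
print(with_depth)  # -> ['ExampleVLA-2']
```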

📝 Abstract
Amid growing efforts to leverage advances in large language models (LLMs) and vision-language models (VLMs) for robotics, Vision-Language-Action (VLA) models have recently gained significant attention. By unifying vision, language, and action data at scale, which have traditionally been studied separately, VLA models aim to learn policies that generalise across diverse tasks, objects, embodiments, and environments. This generalisation capability is expected to enable robots to solve novel downstream tasks with minimal or no additional task-specific data, facilitating more flexible and scalable real-world deployment. Unlike previous surveys that focus narrowly on action representations or high-level model architectures, this work offers a comprehensive, full-stack review, integrating both software and hardware components of VLA systems. In particular, this paper provides a systematic review of VLAs, covering their strategy and architectural transition, architectures and building blocks, modality-specific processing techniques, and learning paradigms. In addition, to support the deployment of VLAs in real-world robotic applications, we also review commonly used robot platforms, data collection strategies, publicly available datasets, data augmentation methods, and evaluation benchmarks. Throughout this comprehensive survey, this paper aims to offer practical guidance for the robotics community in applying VLAs to real-world robotic systems. All references categorized by training approach, evaluation method, modality, and dataset are available in the table on our project website: https://vla-survey.github.io.
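To make the VLA formulation in the abstract concrete: conceptually, a VLA policy maps one visual observation plus one language instruction to a robot action. The sketch below is a minimal stand-in under assumed names (VLAPolicy, predict_action), not an implementation of any specific model reviewed in the paper.

```python
import numpy as np

class VLAPolicy:
    """Conceptual VLA policy interface (illustrative stub; names are assumed
    and this does not correspond to any specific model in the survey)."""

    def __init__(self, action_dim: int = 7):
        # A common action space: 3 position deltas, 3 rotation deltas, 1 gripper.
        self.action_dim = action_dim

    def predict_action(self, image: np.ndarray, instruction: str) -> np.ndarray:
        """Map one RGB observation and one language instruction to an action.
        A real VLA would run a VLM backbone here; this stub returns zeros."""
        assert image.ndim == 3, "expected an HxWxC image"
        assert instruction, "expected a non-empty instruction"
        return np.zeros(self.action_dim)

# Closed-loop usage: query the policy once per camera frame.
policy = VLAPolicy()
frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder camera image
action = policy.predict_action(frame, "pick up the red block")
print(action.shape)  # -> (7,)
```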
Problem

Research questions and friction points this paper is trying to address.

Reviewing Vision-Language-Action models for robotics applications
Integrating software and hardware for real-world robot deployment
Providing practical guidance on VLA systems and datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified vision-language-action models for robotics
Full-stack review integrating software and hardware
Systematic coverage of architectures and deployment strategies