Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future

📅 2025-12-18
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Conventional modular autonomous driving architectures exhibit poor robustness in long-tail scenarios, where perception errors readily cascade downstream, and existing vision-action (VA) models suffer from limited interpretability, generalization, and instruction-following capability. Method: This paper systematically surveys the evolution of vision-language-action (VLA) models for autonomous driving, proposing two foundational paradigms, end-to-end and dual-system, that unify multimodal perception, structured language reasoning, and action generation. Contribution/Results: We introduce the first taxonomy and evaluation framework for VLA methods in autonomous driving, categorizing approaches by textual vs. numerical action generation and explicit vs. implicit guidance. Our framework integrates vision-language models (VLMs), instruction tuning, action tokenization, hierarchical decision-making, and driving-specific datasets and methods (e.g., the DriveLM benchmark and the VAD planner). We establish VLA as the next-generation foundational paradigm for autonomous driving, explicitly identifying core challenges, including robustness, interpretability, and instruction fidelity, and providing theoretical grounding and a development roadmap for trustworthy human-machine collaborative driving.
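The summary names action tokenization as one of the building blocks VLA models use to let a language model emit driving commands. The sketch below shows the basic idea in Python: continuous control values are uniformly binned into a discrete vocabulary and decoded back. The bin count and value ranges are illustrative assumptions, not numbers taken from any surveyed model.

```python
# Illustrative sketch of action tokenization: continuous control values are
# binned into a discrete vocabulary so a language model can emit them as tokens.
# N_BINS and RANGES are assumptions chosen for illustration only.

N_BINS = 256                  # assumed vocabulary size per action dimension
RANGES = {                    # assumed physical ranges for each dimension
    "steer": (-1.0, 1.0),     # normalized steering angle
    "accel": (-5.0, 3.0),     # m/s^2, full braking to full throttle
}

def tokenize(name: str, value: float) -> int:
    """Map a continuous action value to a discrete token id via uniform binning."""
    lo, hi = RANGES[name]
    value = min(max(value, lo), hi)          # clamp to the valid range
    return round((value - lo) / (hi - lo) * (N_BINS - 1))

def detokenize(name: str, token: int) -> float:
    """Map a token id back to a continuous value (lossy inverse of tokenize)."""
    lo, hi = RANGES[name]
    return lo + token / (N_BINS - 1) * (hi - lo)

if __name__ == "__main__":
    t = tokenize("steer", -0.37)
    print(t, detokenize("steer", t))   # round-trips to within one bin width
```

The quantization is deliberately lossy: finer bins reduce control error but enlarge the action vocabulary the model must predict over.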

📝 Abstract
Autonomous driving has long relied on modular "Perception-Decision-Action" pipelines, where hand-crafted interfaces and rule-based components often break down in complex or long-tailed scenarios. Their cascaded design further propagates perception errors, degrading downstream planning and control. Vision-Action (VA) models address some limitations by learning direct mappings from visual inputs to actions, but they remain opaque, sensitive to distribution shifts, and lack structured reasoning or instruction-following capabilities. Recent progress in Large Language Models (LLMs) and multimodal learning has motivated the emergence of Vision-Language-Action (VLA) frameworks, which integrate perception with language-grounded decision making. By unifying visual understanding, linguistic reasoning, and actionable outputs, VLAs offer a pathway toward more interpretable, generalizable, and human-aligned driving policies. This work provides a structured characterization of the emerging VLA landscape for autonomous driving. We trace the evolution from early VA approaches to modern VLA frameworks and organize existing methods into two principal paradigms: End-to-End VLA, which integrates perception, reasoning, and planning within a single model, and Dual-System VLA, which separates slow deliberation (via VLMs) from fast, safety-critical execution (via planners). Within these paradigms, we further distinguish subclasses such as textual vs. numerical action generators and explicit vs. implicit guidance mechanisms. We also summarize representative datasets and benchmarks for evaluating VLA-based driving systems and highlight key challenges and open directions, including robustness, interpretability, and instruction fidelity. Overall, this work aims to establish a coherent foundation for advancing human-compatible autonomous driving systems.
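One way to make the abstract's classification axes concrete is to write them down as a small data structure. The following Python sketch is a reading aid only; the example entry ("HypotheticalDriver") is a placeholder, not a categorization taken from the survey.

```python
# Hedged encoding of the survey's classification axes as Python enums.
# The example instance at the bottom is hypothetical.
from dataclasses import dataclass
from enum import Enum

class Paradigm(Enum):
    END_TO_END = "end-to-end"    # single model: perception -> reasoning -> planning
    DUAL_SYSTEM = "dual-system"  # slow VLM deliberation + fast planner execution

class ActionOutput(Enum):
    TEXTUAL = "textual"          # actions emitted as language tokens
    NUMERICAL = "numerical"      # actions emitted as continuous values/waypoints

class Guidance(Enum):
    EXPLICIT = "explicit"        # language output directly conditions the planner
    IMPLICIT = "implicit"        # language shapes shared latent features

@dataclass
class VLAMethod:
    name: str
    paradigm: Paradigm
    action_output: ActionOutput
    guidance: Guidance

# Placeholder entry; consult the survey for actual categorizations.
example = VLAMethod("HypotheticalDriver", Paradigm.DUAL_SYSTEM,
                    ActionOutput.NUMERICAL, Guidance.EXPLICIT)
```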
Problem

Research questions and friction points this paper is trying to address.

Modular "Perception-Decision-Action" pipelines break down in complex, long-tail scenarios, and their cascaded design propagates perception errors into planning and control.
Vision-Action models map visual inputs directly to actions but remain opaque, brittle under distribution shift, and unable to follow instructions or reason in a structured way.
The fast-emerging Vision-Language-Action landscape still lacks a coherent taxonomy and evaluation framework to guide progress on interpretability and generalization.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Traces the evolution from vision-action models to VLA frameworks that integrate visual understanding with language-grounded reasoning for driving.
End-to-End VLA: unifies perception, reasoning, and planning within a single model.
Dual-System VLA: separates slow deliberation (via VLMs) from fast, safety-critical execution (via planners); a minimal loop sketch follows below.
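To make the dual-system idea concrete, here is a minimal, assumption-laden Python sketch of the slow/fast loop: a stubbed VLM ("System 2") refreshes high-level guidance at low frequency while a stubbed planner ("System 1") closes the control loop every tick. All function names, rates, and outputs are hypothetical, not drawn from any specific system in the survey.

```python
# Minimal dual-system control loop sketch. slow_deliberate and fast_plan are
# stubs standing in for a VLM and a safety-critical planner, respectively.

SLOW_EVERY = 20   # assumed: System 2 refreshes once per 20 control ticks (~1 s at 20 Hz)

def slow_deliberate(observation) -> str:
    """Stub for VLM reasoning; a real system would return structured guidance."""
    return "yield to pedestrian, then proceed straight"

def fast_plan(observation, guidance: str) -> dict:
    """Stub for the fast planner, conditioned on the latest available guidance."""
    return {"steer": 0.0, "accel": -1.0 if "yield" in guidance else 0.5}

def run(n_ticks: int = 60) -> None:
    guidance = "proceed with caution"          # initial guidance before first VLM pass
    for tick in range(n_ticks):
        obs = None                             # placeholder for camera/LiDAR input
        if tick % SLOW_EVERY == 0:
            guidance = slow_deliberate(obs)    # slow path: low-frequency deliberation
        action = fast_plan(obs, guidance)      # fast path: runs every control tick
        if tick % SLOW_EVERY == 0:
            print(f"tick {tick:3d}: guidance={guidance!r} action={action}")

if __name__ == "__main__":
    run()
```

The key property this sketch illustrates is decoupled cadences: the fast path never blocks on the VLM and always acts on the most recent guidance, which is what keeps the execution layer safety-critical while the deliberation layer stays slow and interpretable.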