A Survey on Vision-Language-Action Models: An Action Tokenization Perspective

📅 2025-07-02
🤖 AI Summary
Current Vision-Language-Action (VLA) models lack a systematic understanding of action tokens, hindering principled model design and research direction. This work establishes action tokenization as a unifying analytical lens, proposing the first taxonomy of VLA models grounded in eight distinct action tokenization paradigms. We introduce a comprehensive analytical framework evaluating representation capacity, generalizability, and physical realizability. Leveraging multimodal machine learning principles and systematic literature review, we characterize fundamental trade-offs among accuracy, computational efficiency, and deployment feasibility across paradigms, and identify critical domain blind spots. Our analysis yields a structured cognitive framework for VLA models, clarifies the appropriate application contexts and intrinsic limitations of each action representation, and provides a reproducible methodology and clear evolutionary roadmap for generalizable, embodied-action modeling.

๐Ÿ“ Abstract
The remarkable advancements of vision and language foundation models in multimodal understanding, reasoning, and generation have sparked growing efforts to extend such intelligence to the physical world, fueling the flourishing of vision-language-action (VLA) models. Despite seemingly diverse approaches, we observe that current VLA models can be unified under a single framework: vision and language inputs are processed by a series of VLA modules, producing a chain of "action tokens" that progressively encode more grounded and actionable information, ultimately generating executable actions. We further determine that the primary design choice distinguishing VLA models lies in how action tokens are formulated, which can be categorized into language description, code, affordance, trajectory, goal state, latent representation, raw action, and reasoning. However, there remains a lack of comprehensive understanding regarding action tokens, significantly impeding effective VLA development and obscuring future directions. Therefore, this survey aims to categorize and interpret existing VLA research through the lens of action tokenization, distill the strengths and limitations of each token type, and identify areas for improvement. Through this systematic review and analysis, we offer a synthesized outlook on the broader evolution of VLA models, highlight underexplored yet promising directions, and contribute guidance for future research, hoping to bring the field closer to general-purpose intelligence.
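The unified framing in the abstract, where vision and language inputs pass through a chain of modules that emit progressively more grounded action tokens, can be sketched as a minimal pipeline. This is an illustrative sketch only; the type names, payloads, and module sequence below are assumptions for exposition, not code from the paper. Only the eight token categories come from the survey itself.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List

# The eight action-token paradigms named in the survey's taxonomy.
class TokenType(Enum):
    LANGUAGE_DESCRIPTION = "language description"
    CODE = "code"
    AFFORDANCE = "affordance"
    TRAJECTORY = "trajectory"
    GOAL_STATE = "goal state"
    LATENT = "latent representation"
    RAW_ACTION = "raw action"
    REASONING = "reasoning"

@dataclass
class ActionToken:
    kind: TokenType
    payload: str  # the grounded content, e.g. a subgoal or a waypoint

def vla_pipeline(vision: str, language: str) -> List[ActionToken]:
    """Hypothetical sketch: each VLA module refines the previous token
    into a more grounded, more directly executable one."""
    chain = [ActionToken(TokenType.REASONING, f"plan for: {language}")]
    chain.append(ActionToken(TokenType.LANGUAGE_DESCRIPTION,
                             "move gripper above the cup"))
    chain.append(ActionToken(TokenType.RAW_ACTION,
                             "joint-space command for the target pose"))
    return chain

chain = vla_pipeline("camera frame", "pick up the cup")
```

The point of the sketch is the progression: a model may emit several token types along the way, but the final link in the chain must be executable by the embodiment.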
Problem

Research questions and friction points this paper is trying to address.

Surveying VLA models through the lens of action tokenization
Analyzing the strengths and limitations of each action token type
Identifying future directions for VLA model development
Innovation

Methods, ideas, or system contributions that make the work stand out.

A unified framework describing VLA models as chains of action tokens
A taxonomy of eight action token categories for VLA models
A systematic review of VLA research organized by action tokenization
Yifan Zhong
Peking University
VLA Models, Dexterous Manipulation, Reinforcement Learning

Fengshuo Bai
Shanghai Jiao Tong University
Embodied AI, AI Alignment, Reinforcement Learning, Preference-based Learning

Shaofei Cai
Institute for AI, Peking University, PKU-PsiBot Joint Lab

Xuchuan Huang
Peking University
Robot Learning, Dexterous Manipulation

Zhang Chen
Institute for AI, Peking University, PKU-PsiBot Joint Lab

Xiaowei Zhang
Institute for AI, Peking University, PKU-PsiBot Joint Lab

Yuanfei Wang
Peking University
Robot Learning, Reinforcement Learning

Shaoyang Guo
Peking University
Physics, AI

Tianrui Guan
Waymo
Computer Vision, Perception, Robotics, VLM

Ka Nam Lui
Institute for AI, Peking University, PKU-PsiBot Joint Lab

Zhiquan Qi
Institute for AI, Peking University, PKU-PsiBot Joint Lab

Yitao Liang
Peking University
Machine Learning, AI Reasoning, AI Agent

Yuanpei Chen
South China University of Technology
Robotics

Yaodong Yang
Institute for AI, Peking University, PKU-PsiBot Joint Lab