VLA-0: Building State-of-the-Art VLAs with Zero Modification

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language-action (VLA) models for general-purpose robotic manipulation typically require architectural modifications to the underlying vision-language model (VLM), such as vocabulary expansion or dedicated action heads, which limits modularity and scalability. Method: The paper proposes a minimal, plug-and-play paradigm that represents robot actions directly as text, eliminating any need to alter the VLM architecture or extend its vocabulary. VLA-0 is introduced as the first VLA framework to systematically demonstrate the efficacy of purely textual action representations, eschewing conventional action heads and vocabulary expansion. Contribution/Results: Through instruction tuning and a textual mapping of the action space, VLA-0 achieves state-of-the-art performance on the LIBERO benchmark, outperforming VLAs with more parameters, and it significantly surpasses SmolVLA, a model pre-trained on large-scale real-robot data, on physical robot tasks. This work establishes "unmodified VLM + textualized actions" as an efficient, scalable, and modular paradigm for VLA design.
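The core idea of a "textual mapping of the action space" can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the bin count, the [-1, 1] action range, and the space-separated serialization are all assumptions made for the example.

```python
# Hypothetical sketch of representing robot actions as plain text.
# Assumptions (not from the paper): each action dimension lies in
# [-1, 1] and is discretized into integer bins emitted as text tokens
# that an unmodified VLM can produce with its existing vocabulary.

N_BINS = 1000  # assumed number of discretization bins


def action_to_text(action, n_bins=N_BINS):
    """Map each action dimension in [-1, 1] to an integer bin, serialized as text."""
    tokens = []
    for a in action:
        a = max(-1.0, min(1.0, a))  # clamp to the assumed range
        bin_id = round((a + 1.0) / 2.0 * (n_bins - 1))
        tokens.append(str(bin_id))
    return " ".join(tokens)


def text_to_action(text, n_bins=N_BINS):
    """Invert the mapping: parse the model's text output back into a continuous action."""
    return [int(t) / (n_bins - 1) * 2.0 - 1.0 for t in text.split()]
```

Because both directions are pure string manipulation, a standard VLM can be instruction-tuned to emit such strings with no new tokens, action heads, or architectural changes.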

📝 Abstract
Vision-Language-Action models (VLAs) hold immense promise for enabling generalist robot manipulation. However, the best way to build them remains an open question. Current approaches often add complexity, such as modifying the existing vocabulary of a Vision-Language Model (VLM) with action tokens or introducing special action heads. Curiously, the simplest strategy of representing actions directly as text has remained largely unexplored. This work introduces VLA-0 to investigate this idea. We find that VLA-0 is not only effective; it is surprisingly powerful. With the right design, VLA-0 outperforms more involved models. On LIBERO, a popular benchmark for evaluating VLAs, VLA-0 outperforms all existing methods trained on the same robotic data, including $π_0.5$-KI, OpenVLA-OFT and SmolVLA. Furthermore, without large-scale robotics-specific training, it outperforms methods trained on large-scale robotic data, like $π_0.5$-KI, $π_0$, GR00T-N1 and MolmoAct. These findings also translate to the real world, where VLA-0 outperforms SmolVLA, a VLA model pre-trained on large-scale real data. This paper summarizes our unexpected findings and spells out the specific techniques required to unlock the high performance of this simple yet potent VLA design. Visual results, code, and trained models are provided here: https://vla0.github.io/.
Problem

Research questions and friction points this paper is trying to address.

Exploring text-based action representation for Vision-Language-Action models
Simplifying VLA design without complex modifications or special heads
Achieving state-of-the-art robot manipulation with minimal architectural changes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Representing robot actions directly as text tokens
Using unmodified vision-language model architecture
Achieving state-of-the-art performance without model modifications