VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model

📅 2025-09-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the high computational cost and parameter inefficiency of existing vision-language-action (VLA) models, which typically rely on large-scale pretraining and billion-parameter backbones. To this end, the authors propose VLA-Adapter, a lightweight, end-to-end trainable framework. Methodologically, it introduces Bridge Attention for dynamic cross-modal alignment, integrates vision-language condition injection, and employs a lightweight policy module, enabling direct perception-to-action mapping without robot-domain pretraining. Notably, VLA-Adapter achieves state-of-the-art performance using only a 0.5B-parameter backbone, trained end-to-end in under eight hours on a single consumer-grade GPU. It attains superior results on both simulated and real-world robotic manipulation benchmarks, significantly reducing training cost and deployment complexity while maintaining competitive inference speed and accuracy.

๐Ÿ“ Abstract
Vision-Language-Action (VLA) models typically bridge the gap between perceptual and action spaces by pre-training a large-scale Vision-Language Model (VLM) on robotic data. While this approach greatly enhances performance, it also incurs significant training costs. In this paper, we investigate how to effectively bridge vision-language (VL) representations to action (A). We introduce VLA-Adapter, a novel paradigm designed to reduce the reliance of VLA models on large-scale VLMs and extensive pre-training. To this end, we first systematically analyze the effectiveness of various VL conditions and present key findings on which conditions are essential for bridging perception and action spaces. Based on these insights, we propose a lightweight Policy module with Bridge Attention, which autonomously injects the optimal condition into the action space. In this way, our method achieves high performance using only a 0.5B-parameter backbone, without any robotic data pre-training. Extensive experiments on both simulated and real-world robotic benchmarks demonstrate that VLA-Adapter not only achieves state-of-the-art-level performance, but also offers the fastest inference speed reported to date. Furthermore, thanks to the proposed advanced bridging paradigm, VLA-Adapter enables the training of a powerful VLA model in just 8 hours on a single consumer-grade GPU, greatly lowering the barrier to deploying VLA models. Project page: https://vla-adapter.github.io/.
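The abstract describes a Policy module whose Bridge Attention lets action queries attend over candidate VL conditions and autonomously injects the most useful one into the action space. A minimal NumPy sketch of that idea follows; the function names, dimensions, and the learnable-gate weighting scheme are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bridge_attention(action_queries, vl_conditions, gate):
    """Cross-attend action queries over several VL condition features.

    action_queries: (A, d)  action-space query vectors
    vl_conditions:  list of (T, d) VL features, e.g. one per VLM layer
    gate:           (len(vl_conditions),) learnable per-condition logits
                    that let the module weight conditions autonomously
    """
    d = action_queries.shape[-1]
    weights = softmax(gate)                 # soft selection over conditions
    fused = np.zeros_like(action_queries)
    for w, cond in zip(weights, vl_conditions):
        scores = action_queries @ cond.T / np.sqrt(d)   # (A, T)
        attn = softmax(scores, axis=-1)
        fused += w * (attn @ cond)          # inject this condition
    return action_queries + fused           # residual injection into action space
```

A usage sketch: with 8 action queries of width 64 and four candidate condition sets of 16 tokens each, the output keeps the query shape `(8, 64)`, so a downstream action head can decode it directly.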
Problem

Research questions and friction points this paper is trying to address.

Reducing training costs of Vision-Language-Action models
Bridging vision-language representations to action spaces
Eliminating reliance on large VLMs and pre-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight Policy module with Bridge Attention
No robotic data pre-training required
Trains in 8 hours on a single consumer-grade GPU
Authors

Yihao Wang (Beijing University of Posts and Telecommunications)
Pengxiang Ding (Zhejiang University)
Lingxiao Li (Beijing University of Posts and Telecommunications)
Can Cui (Westlake University)
Zirui Ge (Zhejiang University)
Xinyang Tong (Westlake University)
Wenxuan Song (The Hong Kong University of Science and Technology (Guangzhou))
Han Zhao (Westlake University)
Wei Zhao (Westlake University)
Pengxu Hou (The Hong Kong University of Science and Technology (Guangzhou))
Siteng Huang (Alibaba DAMO Academy | ZJU | Westlake University)
Yifan Tang (SF Motors Inc)
Wenhui Wang (Beijing University of Posts and Telecommunications)
Ru Zhang (Beijing University of Posts and Telecommunications)
Jianyi Liu (Beijing University of Posts and Telecommunications)
Donglin Wang (Westlake University)