TinyVLA: Toward Fast, Data-Efficient Vision-Language-Action Models for Robotic Manipulation

📅 2024-09-19
🏛️ IEEE Robotics and Automation Letters
📈 Citations: 42
Influential: 2
🤖 AI Summary
Existing vision-language-action (VLA) models suffer from slow inference, heavy reliance on large-scale robot pretraining data, and poor deployability for dexterous manipulation. Method: We propose TinyVLA, a lightweight VLA architecture that pairs a fast multimodal backbone with a diffusion-based policy decoder; the policy is initialized from robust pretrained multimodal models and fine-tuned end-to-end, both in simulation and on real robots. Contribution/Results: To our knowledge, TinyVLA is the first VLA model to eliminate the robot-data pretraining stage entirely. It achieves real-time inference while cutting the required training data by an order of magnitude, and on multi-scenario generalization benchmarks it matches or surpasses the state-of-the-art OpenVLA without any pretraining overhead.

📝 Abstract
Vision-Language-Action (VLA) models have shown remarkable potential in visuomotor control and instruction comprehension through end-to-end learning processes. However, current VLA models face significant challenges: they are slow during inference and require extensive pre-training on large amounts of robotic data, making real-world deployment difficult. In this letter, we introduce a new family of compact vision-language-action models, called TinyVLA, which offers two key advantages over existing VLA models: (1) faster inference speeds, and (2) improved data efficiency, eliminating the need for a pre-training stage. Our framework incorporates two essential components to build TinyVLA: (1) initializing the policy backbone with robust, high-speed multimodal models, and (2) integrating a diffusion policy decoder during fine-tuning to enable precise robot actions. We conducted extensive evaluations of TinyVLA both in simulation and on real robots, demonstrating that our approach significantly outperforms the state-of-the-art VLA model, OpenVLA, in terms of speed and data efficiency, while delivering comparable or superior performance. Additionally, TinyVLA exhibits strong generalization capabilities across various dimensions, including language instructions, novel objects, unseen positions, changes in object appearance, background variations, and environmental shifts, often matching or exceeding the performance of OpenVLA. We believe that TinyVLA offers an interesting perspective on utilizing pre-trained multimodal models for policy learning.
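The abstract describes the two components only at a high level. As a non-authoritative illustration, the PyTorch-style sketch below shows one way a pretrained multimodal backbone could be wired to a small diffusion action head; the class, module names, and dimensions (`TinyVLASketch`, `embed_dim`, `horizon`, `action_dim`) are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TinyVLASketch(nn.Module):
    """Minimal sketch: compact VLM backbone feeding a diffusion action head.
    All module names and shapes are illustrative, not from the paper."""

    def __init__(self, vlm_backbone: nn.Module, embed_dim: int = 512,
                 horizon: int = 16, action_dim: int = 7):
        super().__init__()
        self.backbone = vlm_backbone  # initialized from a pretrained multimodal model
        self.horizon = horizon
        self.action_dim = action_dim
        # Small denoiser: predicts the noise in an action chunk, conditioned
        # on the multimodal embedding and the diffusion timestep.
        self.denoiser = nn.Sequential(
            nn.Linear(horizon * action_dim + embed_dim + 1, 1024),
            nn.Mish(),
            nn.Linear(1024, horizon * action_dim),
        )

    def encode(self, image, instruction_tokens):
        # Fuse vision and language into one conditioning vector (B, embed_dim).
        return self.backbone(image, instruction_tokens)

    def predict_noise(self, noisy_actions, timestep, cond):
        b = noisy_actions.shape[0]
        x = torch.cat([noisy_actions.flatten(1), cond,
                       timestep.float().view(b, 1)], dim=-1)
        return self.denoiser(x).view(b, self.horizon, self.action_dim)
```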
Problem

Research questions and friction points this paper is trying to address.

Slow inference speeds in current VLA models
High data requirements for pre-training VLA models
Difficulty in real-world deployment of VLA models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compact VLA models for faster inference
Data-efficient without pre-training stage
Diffusion policy decoder for precise actions (a generic sampling sketch follows this list)
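To make the last point concrete: at inference time a diffusion policy produces an action chunk by starting from Gaussian noise and repeatedly applying a learned denoiser conditioned on the multimodal embedding. The loop below is a generic DDPM-style sketch under that assumption, not the paper's code; `predict_noise` refers to the hypothetical denoiser sketched earlier, and the linear beta schedule is illustrative.

```python
import torch

@torch.no_grad()
def sample_actions(model, cond, horizon=16, action_dim=7, num_steps=100):
    """Generic DDPM reverse process for an action chunk; schedule constants
    and shapes are assumptions, not the authors' settings."""
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(cond.shape[0], horizon, action_dim)  # start from pure noise
    for t in reversed(range(num_steps)):
        t_batch = torch.full((cond.shape[0],), t, dtype=torch.long)
        eps = model.predict_noise(x, t_batch, cond)
        # Posterior mean of the reverse step (standard DDPM update rule).
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:  # add noise on every step except the last
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x  # denoised action chunk, e.g. end-effector deltas + gripper
```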
👥 Authors
Junjie Wen
School of Computer Science, East China Normal University, China
Yichen Zhu
Midea Group, AI Lab, China
Jinming Li
Shanghai University
Embodied Intelligence · Robotics
Minjie Zhu
East China Normal University
MLLM · Robotics
Kun Wu
Syracuse University, USA
Zhiyuan Xu
Beijing Innovation Center of Humanoid Robotics, China
Ning Liu
Midea Group, AI Lab, China
Ran Cheng
Midea Group, AI Lab, China
Chaomin Shen
Department of Computer Science, East China Normal University
Image Processing · Machine Learning
Yaxin Peng
Department of Computer Science, Shanghai University, China
Feifei Feng
Midea Group
Jian Tang
Beijing Innovation Center of Humanoid Robotics, China