Improving Vision-Language-Action Model with Online Reinforcement Learning

πŸ“… 2025-01-28
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address training instability, poor convergence, and high computational overhead in online reinforcement learning (RL) for vision-language-action (VLA) models deployed in real-world interactive settings, this paper proposes iRe-VLAβ€”a novel framework enabling efficient closed-loop optimization. Its core innovation is a first-of-its-kind iterative co-training paradigm integrating online RL with supervised fine-tuning (SFT): RL explores the policy space while SFT stabilizes gradient updates; a multi-stage model evolution mechanism further refines performance across simulation and real robotic manipulation tasks. Crucially, iRe-VLA requires no distributed training and runs efficiently on a single machine. Evaluated on two simulation benchmarks and real-world robot manipulation tasks, it achieves substantial gains in task success rate and improves training stability by 3.2Γ— over baseline methods. The framework effectively overcomes key practical bottlenecks hindering the deployment of large-scale VLA models in online RL scenarios.

Technology Category

Application Category

πŸ“ Abstract
Recent studies have successfully integrated large vision-language models (VLMs) into low-level robotic control by supervised fine-tuning (SFT) with expert robotic datasets, resulting in what we term vision-language-action (VLA) models. Although the VLA models are powerful, how to improve these large models during interaction with environments remains an open question. In this paper, we explore how to further improve these VLA models via Reinforcement Learning (RL), a commonly used fine-tuning technique for large models. However, we find that directly applying online RL to large VLA models presents significant challenges, including training instability that severely impacts the performance of large models, and computing burdens that exceed the capabilities of most local machines. To address these challenges, we propose iRe-VLA framework, which iterates between Reinforcement Learning and Supervised Learning to effectively improve VLA models, leveraging the exploratory benefits of RL while maintaining the stability of supervised learning. Experiments in two simulated benchmarks and a real-world manipulation suite validate the effectiveness of our method.
Problem

Research questions and friction points this paper is trying to address.

Reinforcement Learning
VLA Model Optimization
Training Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Supervised Learning
Stability and Efficiency
πŸ”Ž Similar Papers
No similar papers found.
Yanjiang Guo
Yanjiang Guo
Tsinghua University
Embodied AIGenerative Model
Jianke Zhang
Jianke Zhang
Tsinghua University, IIIS
Embodied AI. VLM. Multimodal Learning
X
Xiaoyu Chen
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China; Shanghai Qi Zhi Institute, Shanghai, China
X
Xiang Ji
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
Yen-Jen Wang
Yen-Jen Wang
UC Berkeley
Robotics
Y
Yucheng Hu
Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
Jianyu Chen
Jianyu Chen
Assistant Professor, Tsinghua University
AIRobotics