VLA-RL: Towards Masterful and General Robotic Manipulation with Scalable Reinforcement Learning

πŸ“… 2025-05-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
General-purpose robots often fail in out-of-distribution (OOD) scenarios because offline training datasets cover only a limited set of states. Method: This paper proposes VLA-RL, an online reinforcement learning (RL) framework for adapting pretrained vision-language-action (VLA) models. It introduces (1) a trajectory-level autoregressive RL formulation that models a robotic manipulation trajectory as a multimodal, multi-turn dialog; (2) a vision-language process reward model trained on pseudo reward labels over automatically extracted task segments; and (3) engineering optimizations including curriculum-based episode selection, GPU-balanced vectorized environments, batched decoding, and critic warmup. Results: On 40 challenging LIBERO manipulation tasks, VLA-RL lifts OpenVLA-7B's success rate 4.5% above the strongest supervised fine-tuning baseline and matches advanced commercial models such as Ο€β‚€-FAST. Moreover, performance improves with additional test-time optimization, an early indication of inference scaling laws in robotics.
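To make the trajectory-level, multi-turn-dialog formulation concrete, here is a minimal sketch under assumed structure (the names `Turn`, `build_dialog`, and `token_advantages` are hypothetical, not from the paper's code): each step contributes a user turn (image plus instruction) and an assistant turn (action tokens), and a single trajectory-level return is spread over the generated action tokens.

```python
# Hypothetical sketch only: Turn, build_dialog, and token_advantages are
# illustrative names, not the paper's actual implementation.
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    image: bytes               # camera observation at this step (placeholder type)
    instruction: str           # language goal for the task
    action_tokens: List[int]   # discretized action emitted by the VLA policy

def build_dialog(turns: List[Turn]) -> List[dict]:
    """Serialize an episode as chat-style messages: one user/assistant pair per step."""
    messages = []
    for t in turns:
        messages.append({"role": "user", "content": [t.image, t.instruction]})
        messages.append({"role": "assistant", "content": t.action_tokens})
    return messages

def token_advantages(turns: List[Turn], trajectory_return: float) -> List[float]:
    """Spread a single trajectory-level return uniformly over all generated
    action tokens: the simplest credit assignment consistent with a
    trajectory-level RL objective."""
    n_tokens = sum(len(t.action_tokens) for t in turns)
    return [trajectory_return / max(n_tokens, 1)] * n_tokens
```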

πŸ“ Abstract
Recent high-capacity vision-language-action (VLA) models have demonstrated impressive performance on a range of robotic manipulation tasks by imitating human demonstrations. However, relying on offline data with limited state coverage causes execution failures in out-of-distribution scenarios. Intuitively, an exploration-based method that improves on online collected data at test time could address this limitation. We present VLA-RL, an algorithmic and systematic framework that leverages online reinforcement learning (RL) to improve pretrained auto-regressive VLAs on downstream tasks. Within a unified perspective, we first introduce a trajectory-level RL formulation for auto-regressive VLA training, which models a general robotic manipulation trajectory as a multi-modal, multi-turn conversation. To address the challenge of sparse rewards, we fine-tune a pretrained vision-language model as a robotic process reward model, trained on pseudo reward labels annotated over automatically extracted task segments. To scale up, we identify several implementation findings that improve stability and efficiency, including a curriculum selection strategy, GPU-balanced vectorized environments, batched decoding, and critic warmup. VLA-RL enables OpenVLA-7B to surpass the strongest fine-tuned baseline by 4.5% on 40 challenging robotic manipulation tasks in LIBERO, and even to match the performance of advanced commercial models such as $\pi_0$-FAST. Notably, we observe that VLA-RL benefits from increased test-time optimization, indicating an early spark of inference scaling laws in robotics.
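The abstract's process reward model is trained on pseudo reward labels over automatically extracted task segments. Below is a minimal sketch of one plausible labeling rule, an assumption rather than the paper's published procedure: frames inside an extracted successful segment receive label 1, all other frames 0; these labels would then supervise fine-tuning of the vision-language reward model.

```python
from typing import List, Tuple

def pseudo_labels(num_frames: int,
                  segments: List[Tuple[int, int]]) -> List[int]:
    """Hypothetical labeling rule: frames inside an automatically extracted
    successful task segment get pseudo reward label 1, all other frames get 0.
    These labels would supervise fine-tuning of a vision-language model into
    a process reward model."""
    labels = [0] * num_frames
    for start, end in segments:
        for i in range(start, min(end + 1, num_frames)):
            labels[i] = 1
    return labels

# Example: a 10-frame episode with one successful segment spanning frames 3..7.
print(pseudo_labels(10, [(3, 7)]))  # -> [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]
```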
Problem

Research questions and friction points this paper is trying to address.

Improving robotic manipulation in out-of-distribution scenarios
Addressing sparse rewards in auto-regressive VLA training
Scaling reinforcement learning for stable and efficient performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses online RL to enhance pretrained VLA models
Trains robotic process reward model with pseudo labels
Implements GPU-balanced vectorized environments for scaling (a sketch follows this list)
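A minimal sketch of what GPU-balanced vectorized environments could look like, assuming simulator instances are round-robined across the visible CUDA devices so no single GPU carries a disproportionate rendering load; `gpu_balanced_device_ids` is a hypothetical helper, not from the paper's code.

```python
# Hypothetical helper: round-robin simulator instances over visible CUDA
# devices so the rendering load stays balanced across GPUs.
import torch

def gpu_balanced_device_ids(num_envs: int) -> list[int]:
    """Assign each vectorized environment a CUDA device id in round-robin order."""
    num_gpus = max(torch.cuda.device_count(), 1)  # avoid division by zero on CPU-only hosts
    return [i % num_gpus for i in range(num_envs)]

# Example: 8 environments on a 3-GPU machine -> [0, 1, 2, 0, 1, 2, 0, 1]
print(gpu_balanced_device_ids(8))
```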
Guanxing Lu
Tsinghua University
VLA, RL, Robotics, 3D Vision
Wenkai Guo
School of Electrical and Electronic Engineering, Nanyang Technological University
Chubin Zhang
Tsinghua University
Embodied AI, 3D Vision
Yuheng Zhou
School of Electrical and Electronic Engineering, Nanyang Technological University
Haonan Jiang
Tsinghua Shenzhen International Graduate School, Tsinghua University
Zifeng Gao
Tsinghua Shenzhen International Graduate School, Tsinghua University
Yansong Tang
Tsinghua Shenzhen International Graduate School, Tsinghua University
Ziwei Wang
School of Electrical and Electronic Engineering, Nanyang Technological University