RL-VLA$^3$: Reinforcement Learning VLA Accelerating via Full Asynchronism

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the first fully asynchronous Vision-Language-Action (VLA) training framework spanning the entire pipeline, from environment interaction and policy generation to model updates, addressing the low resource utilization and limited throughput inherent in synchronous reinforcement learning setups. By introducing a multi-level decoupled architecture and a streaming execution mechanism, the framework enables asynchronous parallelism across environment interaction, policy inference, and training scheduling. This design substantially improves system throughput and scalability, achieving up to a 126.67% throughput improvement over synchronous baselines on the LIBERO benchmark and demonstrating strong scaling efficiency from 8 to 256 GPUs.

📝 Abstract
In recent years, Vision-Language-Action (VLA) models have emerged as a crucial pathway toward general embodied intelligence, yet their training efficiency has become a key bottleneck. Although existing reinforcement learning (RL)-based training frameworks such as RLinf can enhance model generalization, they still rely on synchronous execution, leading to severe resource underutilization and throughput limitations during the environment interaction, policy generation (rollout), and model update (actor) phases. To overcome this challenge, this paper, for the first time, proposes and implements a fully asynchronous policy training framework encompassing the entire pipeline from environment interaction and rollout generation to actor policy updates. Systematically drawing inspiration from asynchronous optimization ideas in large-model RL, our framework designs a multi-level decoupled architecture. This includes asynchronous parallelization of environment interaction and trajectory collection, streaming execution for policy generation, and decoupled scheduling for training updates. We validated the effectiveness of our method across diverse VLA models and environments. On the LIBERO benchmark, the framework achieves throughput improvements of up to 59.25% compared to existing synchronous strategies. With further optimization of the separation strategies, throughput can be increased by as much as 126.67%. We verified the effectiveness of each asynchronous component via ablation studies. Scaling-law validation across 8 to 256 GPUs demonstrates our method's excellent scalability under most conditions.
Problem

Research questions and friction points this paper is trying to address.

Vision-Language-Action
Reinforcement Learning
Training Efficiency
Synchronous Execution
Throughput Bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

fully-asynchronous
Vision-Language-Action
reinforcement learning
multi-level decoupling
scalable training