Interactive Post-Training for Vision-Language-Action Models

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Vision-Language-Action (VLA) models rely heavily on offline expert demonstrations and supervised imitation learning, limiting their adaptability to novel tasks and environments under low-data regimes. To address this, we propose RIPT-VLA, a reinforcement learning–based interactive post-training paradigm that leverages sparse binary success rewards, eliminating dependence on expert trajectories. Its core innovations are dynamic rollout sampling and a leave-one-out advantage estimator, enabling efficient fine-tuning with zero or minimal demonstrations. Experiments demonstrate substantial improvements: a 21.2% success-rate gain on the lightweight QueST model and a 97.5% absolute success rate on the 7B OpenVLA-OFT model. Notably, with only one demonstration and 15 training iterations, RIPT-VLA elevates a previously failing model from a 4% to a 97% success rate, while also enhancing cross-task generalization and robustness to varying initial states.

📝 Abstract
We introduce RIPT-VLA, a simple and scalable reinforcement-learning-based interactive post-training paradigm that fine-tunes pretrained Vision-Language-Action (VLA) models using only sparse binary success rewards. Existing VLA training pipelines rely heavily on offline expert demonstration data and supervised imitation, limiting their ability to adapt to new tasks and environments under low-data regimes. RIPT-VLA addresses this by enabling interactive post-training with a stable policy optimization algorithm based on dynamic rollout sampling and leave-one-out advantage estimation. RIPT-VLA has the following characteristics. First, it applies to various VLA models, resulting in an improvement on the lightweight QueST model by 21.2%, and the 7B OpenVLA-OFT model to an unprecedented 97.5% success rate. Second, it is computationally efficient and data-efficient: with only one demonstration, RIPT-VLA enables an unworkable SFT model (4%) to succeed with a 97% success rate within 15 iterations. Furthermore, we demonstrate that the policy learned by RIPT-VLA generalizes across different tasks and scenarios and is robust to the initial state context. These results highlight RIPT-VLA as a practical and effective paradigm for post-training VLA models through minimal supervision.
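The abstract's two core components, dynamic rollout sampling and leave-one-out advantage estimation, can be sketched in a few lines. This is a minimal illustration based only on the standard definitions of these techniques, not the paper's implementation; the function names and the filtering rule are assumptions.

```python
def leave_one_out_advantages(rewards):
    """Leave-one-out advantage for K rollouts with sparse binary rewards.

    Each rollout is baselined against the mean reward of the other K-1
    rollouts: A_i = r_i - (sum(r) - r_i) / (K - 1).
    """
    K = len(rewards)
    total = sum(rewards)
    return [r - (total - r) / (K - 1) for r in rewards]


def keep_group(rewards):
    """Dynamic-sampling filter (assumed form): drop rollout groups whose
    outcomes are all identical (all success or all failure), since every
    leave-one-out advantage in such a group is zero and carries no signal."""
    return len(set(rewards)) > 1


# Example: 4 rollouts of one task, binary success rewards.
rewards = [1, 0, 1, 1]
if keep_group(rewards):
    advs = leave_one_out_advantages(rewards)  # successes get +1/3, the failure gets -1.0
```

With binary rewards the advantage is positive exactly for successful rollouts and negative for failed ones, which is what makes this estimator workable without a learned value function.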
Problem

Research questions and friction points this paper is trying to address.

Heavy reliance on offline expert demonstrations and supervised imitation
Poor adaptability to new tasks and environments under low-data regimes
Lack of post-training methods that work from sparse binary success rewards alone
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement-learning-based interactive post-training paradigm
Dynamic rollout sampling and leave-one-out advantage estimation
Improves VLA models with minimal supervision
Shuhan Tan
UT Austin
Machine Learning · Computer Vision
Kairan Dou
Nankai University
Yue Zhao
UT Austin
Philipp Krähenbühl
UT Austin