AI Summary
To address reward sparsity, temporal policy inconsistency, and training instability in reinforcement learning (RL) post-training of vision-language-action (VLA) models, this paper proposes a PPO-based framework with action chunking, an integrated self-behavioral cloning (Self-BC) loss, and a dynamically updated demonstration buffer, together with an online dual-objective loss weighting strategy. Key contributions: (1) segmenting continuous action sequences into semantically coherent action chunks to improve temporal policy modeling; (2) constructing a dynamic demonstration buffer from self-collected high-quality trajectories to increase feedback density; and (3) jointly optimizing the RL and BC objectives for stable, efficient training. Evaluated on MetaWorld, the method achieves a success rate of 0.93 and reduces average task completion steps to 42.17, substantially outperforming supervised fine-tuning baselines. The results demonstrate the effectiveness and robustness of the proposed approach for RL post-training of VLA models.
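The chunking step in contribution (1) can be illustrated with a minimal sketch. The paper does not specify how chunk boundaries are chosen; the fixed-length grouping below (with a hypothetical `chunk_size`) stands in for whatever segmentation the authors use:

```python
import numpy as np

def chunk_actions(actions, chunk_size):
    """Group a sequence of per-step actions into consecutive chunks.

    actions: array of shape (T, action_dim) holding one action per timestep.
    chunk_size: hypothetical fixed chunk length; the paper's segmentation
        into "semantically coherent" chunks may be more sophisticated.
    Returns a list of arrays of shape (<=chunk_size, action_dim); the last
    chunk is shorter when T is not a multiple of chunk_size.
    """
    return [actions[i:i + chunk_size] for i in range(0, len(actions), chunk_size)]
```

Treating each chunk as a single decision unit is what lets the policy receive denser, temporally consistent feedback than per-step actions would.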
Abstract
Reinforcement learning (RL) is a promising avenue for post-training vision-language-action (VLA) models, but practical deployment is hindered by sparse rewards and unstable training. This work mitigates these challenges by introducing action-chunked proximal policy optimization (PPO) combined with behavior cloning on self-collected demonstrations. Aggregating consecutive actions into chunks improves the temporal consistency of the policy and the density of informative feedback. In addition, an auxiliary behavior cloning loss is applied with a dynamically updated demonstration buffer that continually collects high-quality task trials during training. The relative weight between the action-chunked PPO objective and the self-behavior-cloning (Self-BC) auxiliary loss is adapted online to stabilize post-training. Experiments on the MetaWorld benchmark show improved performance over supervised fine-tuning, achieving a high success rate (0.93) and a low average number of steps to success (42.17). These results demonstrate the viability of RL for VLA post-training and lay groundwork for downstream VLA applications.
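The interaction between the demonstration buffer and the adaptive loss weighting can be sketched as follows. The abstract does not give the buffer capacity or the weighting rule, so the bounded deque, the `maxlen`, and the linear decay schedule in `combined_loss` are all illustrative assumptions, not the paper's actual scheme:

```python
import numpy as np
from collections import deque

class DemoBuffer:
    """Dynamically updated buffer of self-collected successful trajectories."""

    def __init__(self, maxlen=256):
        # Hypothetical capacity; old demonstrations are evicted FIFO.
        self.buf = deque(maxlen=maxlen)

    def add(self, trajectory, success):
        # Only high-quality (here: successful) trials enter the buffer.
        if success:
            self.buf.append(trajectory)

    def sample(self, k, rng):
        # Draw up to k demonstrations for the Self-BC loss term.
        n = min(k, len(self.buf))
        idx = rng.choice(len(self.buf), size=n, replace=False)
        return [self.buf[i] for i in idx]

def combined_loss(ppo_loss, bc_loss, step, decay_steps=10_000):
    """Online dual-objective weighting (illustrative linear schedule).

    The BC weight starts at 1.0 and decays to 0.0 over decay_steps,
    shifting emphasis from imitation to the action-chunked PPO objective.
    """
    bc_weight = max(0.0, 1.0 - step / decay_steps)
    return ppo_loss + bc_weight * bc_loss
```

The key design point is that both terms are computed every update: the Self-BC loss anchors the policy to known-good behavior early on, while the schedule hands control to the RL objective as training stabilizes.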