🤖 AI Summary
Vision-Language-Action (VLA) models rely heavily on costly human demonstration data, which limits their scalability. To address this, we propose Diffusion-RL, a framework that integrates diffusion modeling with reinforcement learning. Leveraging the implicit regularization of the iterative denoising process, the method autonomously generates high-quality, low-variance, temporally smooth, and semantically consistent training trajectories, mitigating the exploration challenges of sparse-reward, long-horizon tasks. Crucially, Diffusion-RL requires no human demonstrations: it iteratively refines policies through denoising-based optimization, enabling self-supervised skill acquisition while preserving action diversity and structural coherence. On the LIBERO benchmark, Diffusion-RL achieves an average success rate of 81.9%, outperforming a supervised baseline trained on human demonstrations by 5.3%. This advance moves VLA models toward self-supervised, low-cost, and generalizable robotic learning.
📝 Abstract
Vision-language-action (VLA) models have shown strong generalization across tasks and embodiments; however, their reliance on large-scale human demonstrations limits their scalability owing to the cost and effort of manual data collection. Reinforcement learning (RL) offers a potential alternative for generating demonstrations autonomously, yet conventional RL algorithms often struggle on long-horizon manipulation tasks with sparse rewards. In this paper, we propose a modified diffusion policy optimization algorithm that generates high-quality, low-variance trajectories, forming the core of a diffusion-RL-powered VLA training pipeline. Our algorithm benefits not only from the high expressiveness of diffusion models for exploring complex and diverse behaviors but also from the implicit regularization of the iterative denoising process, which yields smooth and consistent demonstrations. We evaluate our approach on the LIBERO benchmark of 130 long-horizon manipulation tasks and show that the generated trajectories are smoother and more consistent than both human demonstrations and those from standard Gaussian RL policies. Moreover, a VLA model trained exclusively on the diffusion-RL-generated data achieves an average success rate of 81.9%, outperforming the model trained on human data by 5.3% and the one trained on Gaussian RL-generated data by 12.6%. These results highlight diffusion RL as an effective way to generate abundant, high-quality, low-variance demonstrations for VLA models.
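To make the "iterative denoising" idea concrete, the sketch below samples a single action with a generic DDPM-style reverse process, the family of samplers that diffusion policies build on. Everything here (the toy noise predictor `toy_eps`, the linear noise schedule, the function names and dimensions) is an illustrative assumption, not the paper's actual algorithm or architecture:

```python
import numpy as np

def denoise_action(policy_eps, obs, action_dim=4, n_steps=10, rng=None):
    """Sample one action by iteratively denoising Gaussian noise (DDPM-style).

    `policy_eps(obs, a_t, t)` stands in for a learned noise-prediction
    network; this whole setup is a hypothetical sketch, not the paper's method.
    """
    rng = np.random.default_rng(rng)
    # Simple linear noise schedule (illustrative only).
    betas = np.linspace(1e-4, 0.2, n_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    a_t = rng.standard_normal(action_dim)  # start from pure noise
    for t in reversed(range(n_steps)):
        eps_hat = policy_eps(obs, a_t, t)  # predicted noise at step t
        # DDPM posterior mean update.
        a_t = (a_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) \
              / np.sqrt(alphas[t])
        if t > 0:  # add noise on every step except the last
            a_t = a_t + np.sqrt(betas[t]) * rng.standard_normal(action_dim)
    return a_t

# Toy stand-in predictor: treats the offset from an observation-dependent
# target as the "noise", so denoising pulls the action toward that target.
def toy_eps(obs, a_t, t):
    return a_t - obs

obs = np.array([0.5, -0.2, 0.1, 0.0])
action = denoise_action(toy_eps, obs, rng=0)
print(action.shape)  # (4,)
```

The repeated small denoising steps are what give the sampler its implicit smoothing: each update only nudges the action, so the final sample stays close to the learned action manifold rather than jumping to an outlier, which is the regularization effect the abstract credits for low-variance trajectories.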