SRPO: Self-Referential Policy Optimization for Vision-Language-Action Models

πŸ“… 2025-11-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Vision-language-action (VLA) models suffer from poor generalization and inefficient training due to their reliance on expert demonstrations and sparse reward signals. To address this, we propose a self-supervised reinforcement learning framework that requires no external demonstrations or handcrafted reward design. Our method leverages successful trajectories within the same batch as self-references and quantifies behavioral progress in the latent space of a learned world model, enabling fine-grained, progress-based reward assignment for failed trajectories. Integrated with autoregressive policy optimization and pretrained vision-language models, the framework enables end-to-end learning. On the LIBERO benchmark, the success rate improves from 48.9% to 99.2%, a 103% relative gain within only 200 training steps; on LIBERO-Plus, performance increases by 167%, substantially outperforming existing VLA-RL approaches. The core innovation is a self-referential progress-reward mechanism grounded in contrastive trajectory comparison within the world model's latent space.
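
As a concrete illustration, the sketch below shows one way the in-batch self-reference could work: encode states with a frozen world-model encoder and score each failed trajectory by its best latent-space match to an in-batch success. This is a minimal sketch, not the authors' implementation; the `encode` argument, the cosine-similarity progress proxy, and the max-over-states scoring are all assumptions.

```python
# Hypothetical sketch of a self-referential progress reward, assuming a
# frozen world-model encoder `encode` and cosine similarity in latent
# space as the progress proxy. Not the paper's exact formulation.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def progress_rewards(batch, successes, encode):
    """batch: list of trajectories (each a list of observations);
    successes: parallel list of bools from the binary task signal."""
    # Final-state latents of in-batch successes form the self-reference
    # set, so no external demonstrations are required.
    refs = [encode(traj[-1]) for traj, ok in zip(batch, successes) if ok]
    rewards = []
    for traj, ok in zip(batch, successes):
        if ok:
            rewards.append(1.0)   # successes keep the original sparse reward
        elif not refs:
            rewards.append(0.0)   # no in-batch success yet: nothing to compare to
        else:
            # Score a failure by how close any of its states got, in latent
            # space, to any in-batch success, i.e. how much progress it made.
            latents = [encode(obs) for obs in traj]
            rewards.append(max(cosine(z, r) for z in latents for r in refs))
    return rewards
```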

πŸ“ Abstract
Vision-Language-Action (VLA) models excel in robotic manipulation but are constrained by their heavy reliance on expert demonstrations, leading to demonstration bias and limiting performance. Reinforcement learning (RL) is a vital post-training strategy to overcome these limits, yet current VLA-RL methods, including group-based optimization approaches, are crippled by severe reward sparsity. Relying on binary success indicators wastes valuable information in failed trajectories, resulting in low training efficiency. To solve this, we propose Self-Referential Policy Optimization (SRPO), a novel VLA-RL framework. SRPO eliminates the need for external demonstrations or manual reward engineering by leveraging the model's own successful trajectories, generated within the current training batch, as a self-reference. This allows us to assign a progress-wise reward to failed attempts. A core innovation is the use of latent world representations to measure behavioral progress robustly. Instead of relying on raw pixels or requiring domain-specific fine-tuning, we utilize the compressed, transferable encodings from a world model's latent space. These representations naturally capture progress patterns across environments, enabling accurate, generalized trajectory comparison. Empirical evaluations on the LIBERO benchmark demonstrate SRPO's efficiency and effectiveness. Starting from a supervised baseline with 48.9% success, SRPO achieves a new state-of-the-art success rate of 99.2% in just 200 RL steps, representing a 103% relative improvement without any extra supervision. Furthermore, SRPO shows substantial robustness, achieving a 167% performance improvement on the LIBERO-Plus benchmark.
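
The group-based methods the abstract critiques normalize rewards within a rollout group, so a group of all-failed rollouts with binary rewards has zero variance and yields no gradient. Below is a minimal, hypothetical sketch of where a dense progress reward would enter such an update, assuming GRPO-style normalization; the paper's exact objective may differ.

```python
# Hypothetical GRPO-style group-relative advantage computation. With binary
# rewards, an all-failure group produces no learning signal; dense
# progress-wise rewards keep the signal alive.
import numpy as np

def group_relative_advantages(rewards):
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

# A group with one success and three partial failures still yields a
# graded signal instead of a single spike:
print(group_relative_advantages([1.0, 0.62, 0.31, 0.05]))
```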
Problem

Research questions and friction points this paper is trying to address.

Overcoming demonstration bias in vision-language-action models for robotics
Addressing severe reward sparsity in VLA-RL training methods
Improving training efficiency by utilizing failed trajectory information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses in-batch successful trajectories as self-references for reward assignment
Employs latent world-model representations to measure behavioral progress
Eliminates the need for external demonstrations or manual reward engineering (the sketch after this list ties these pieces together)
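
For completeness, a hypothetical end-to-end glue combining the two sketches above (`progress_rewards` and `group_relative_advantages`); `encode` here is a toy stand-in for the world-model encoder, and none of this is the authors' code.

```python
# Illustrative glue: roll out a batch, score failures against in-batch
# successes, normalize within the group, then weight the policy update.
import numpy as np

rng = np.random.default_rng(0)

def encode(obs):
    # Toy stand-in for the frozen world-model encoder.
    return np.asarray(obs, dtype=np.float64)

# Four rollouts of three observations each; suppose the first succeeds.
batch = [[rng.normal(size=4) for _ in range(3)] for _ in range(4)]
successes = [True, False, False, False]

rewards = progress_rewards(batch, successes, encode)
advantages = group_relative_advantages(rewards)
# The advantages would then weight per-token log-probabilities in an
# autoregressive policy-gradient step; no extra supervision enters.
print(rewards, advantages)
```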
πŸ‘₯ Authors
Senyu Fei (Tongji University)
Siyin Wang (Fudan University)
Li Ji (Fudan University)
Ao Li (Shanghai Innovation Institute)
Shiduo Zhang (Fudan University)
Liming Liu (Shanghai Innovation Institute)
Jinlong Hou (Shanghai Innovation Institute)
Jingjing Gong (Shanghai Innovation Institute)
Xianzhong Zhao (Tongji University)
Xipeng Qiu (Fudan University)