WMPO: World Model-based Policy Optimization for Vision-Language-Action Models

📅 2025-11-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language-action (VLA) models typically rely on expert demonstrations, lack mechanisms for self-correction after failures, and suffer from low sample efficiency when trained with real-world robotic reinforcement learning (RL). Method: This paper proposes WMPO, a World Model-based Policy Optimization framework that constructs a pixel-level world model aligned with the pre-trained VLA's visual-linguistic features, enabling policy optimization without real-world interaction. Building on this, the policy is trained end-to-end with on-policy GRPO (Group Relative Policy Optimization) entirely on trajectories imagined by the world model. Contribution/Results: WMPO moves beyond the conventional offline training paradigm, supporting self-correction, cross-task generalization, and continual learning. Experiments on both simulated and physical robots demonstrate significant improvements in sample efficiency and task performance over state-of-the-art offline VLA methods.

📝 Abstract
Vision-Language-Action (VLA) models have shown strong potential for general-purpose robotic manipulation, but their reliance on expert demonstrations limits their ability to learn from failures and perform self-corrections. Reinforcement learning (RL) addresses these limitations through self-improving interactions with the physical environment, but suffers from high sample complexity on real robots. We introduce World-Model-based Policy Optimization (WMPO), a principled framework for on-policy VLA RL without interacting with the real environment. In contrast to widely used latent world models, WMPO focuses on pixel-based predictions that align the "imagined" trajectories with the VLA features pretrained with web-scale images. Crucially, WMPO enables the policy to perform on-policy GRPO that provides stronger performance than the often-used off-policy methods. Extensive experiments in both simulation and real-robot settings demonstrate that WMPO (i) substantially improves sample efficiency, (ii) achieves stronger overall performance, (iii) exhibits emergent behaviors such as self-correction, and (iv) demonstrates robust generalization and lifelong learning capabilities.
Problem

Research questions and friction points this paper is trying to address.

Enhancing robotic manipulation by enabling vision-language-action models to learn from failures
Reducing real-world sample complexity through pixel-based world model predictions
Achieving self-correction and generalization via on-policy reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

World Model-based Policy Optimization for Vision-Language-Action models
Pixel-based predictions align imagined trajectories with VLA features
On-policy GRPO enables stronger performance than off-policy methods
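To make the on-policy GRPO step above concrete: for each task prompt, a group of trajectories is rolled out inside the world model, and each trajectory's return is normalized against the group's mean and standard deviation to form a group-relative advantage. The sketch below shows only that advantage computation; the group returns are illustrative placeholders, not the paper's implementation or rewards.

```python
import numpy as np

def grpo_advantages(returns, eps=1e-8):
    """Group-relative advantages, the core of GRPO: normalize each
    trajectory's return against the mean/std of its rollout group,
    so no learned value-function baseline is needed."""
    returns = np.asarray(returns, dtype=np.float64)
    return (returns - returns.mean()) / (returns.std() + eps)

# Hypothetical example: 4 imagined rollouts for one task prompt,
# with sparse success rewards (1.0 = task succeeded in the world model).
group_returns = [1.0, 0.0, 0.0, 1.0]
adv = grpo_advantages(group_returns)
# Successful rollouts receive positive advantage, failed ones negative;
# the policy gradient then reinforces the successful action sequences.
```

Because the rollouts happen inside the pixel-level world model rather than on the real robot, these on-policy groups can be sampled cheaply, which is what makes this form of policy optimization sample-efficient in the real world.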
👥 Authors
Fangqi Zhu, Hong Kong University of Science and Technology
Zhengyang Yan, Hong Kong University of Science and Technology
Zicong Hong, Department of Computer Science and Engineering, Hong Kong University of Science and Technology (Blockchain, ML Systems, Edge/Cloud Computing)
Quanxin Shou, Hong Kong University of Science and Technology
Xiao Ma, ByteDance Seed
Song Guo, Chair Professor of CSE, HKUST (Large Language Models, Edge AI, Machine Learning Systems)