Reinforcing Action Policies by Prophesying

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
VLA models suffer from poor generalization and weak out-of-distribution robustness because they rely on imitation learning alone. To address this, we propose ProphRL, a framework that first constructs Prophet, a unified, foresight-oriented world model that maps actions to future video frames and enables few-shot transfer, and then integrates flow-based generative modeling, action-head gradient reweighting (FlowScale), and FA-GRPO, a VLA-adapted variant of the GRPO reinforcement learning algorithm, to achieve data- and compute-efficient policy post-training. Evaluated on both simulated and real-world robotic platforms, ProphRL significantly improves task success rates, by 5–17% on public benchmarks and by 24–30% on real-robot tasks, demonstrating strong cross-model and cross-platform generalization as well as practical deployability.
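To make the pipeline concrete, here is a minimal sketch of how a learned action-to-video world model can stand in for a simulator during policy post-training. Every name below (`model_rollout`, `world_model.predict`, `policy.act`, `reward_fn`) is a hypothetical stand-in chosen for illustration, not ProphRL's actual API.

```python
# Hedged sketch: a learned action-to-video world model used as a rollout
# simulator for RL post-training. All names are hypothetical stand-ins.
from typing import Callable, List, Tuple

import torch


def model_rollout(
    world_model,                                  # maps (frame, action) -> next frame
    policy,                                       # VLA policy: frame -> action
    reward_fn: Callable[[torch.Tensor], float],   # task reward computed on a frame
    first_frame: torch.Tensor,
    horizon: int = 32,
) -> Tuple[List[torch.Tensor], List[torch.Tensor], List[float]]:
    """Roll the policy forward inside the world model instead of a
    hand-engineered simulator or a real robot."""
    frames, actions, rewards = [first_frame], [], []
    for _ in range(horizon):
        action = policy.act(frames[-1])                        # policy proposes an action
        next_frame = world_model.predict(frames[-1], action)   # model foresees the outcome
        frames.append(next_frame)
        actions.append(action)
        rewards.append(reward_fn(next_frame))                  # score the predicted outcome
    return frames, actions, rewards
```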

📝 Abstract
Vision-Language-Action (VLA) policies excel at aligning language, perception, and robot control. However, most VLAs are trained purely by imitation, which overfits to demonstrations and is brittle under distribution shift. Reinforcement learning (RL) directly optimizes task reward and thus addresses this misalignment, but real-robot interaction is expensive and conventional simulators are hard to engineer and transfer. We address both data efficiency and optimization stability in VLA post-training via a learned world model and an RL procedure tailored to flow-based action heads. Specifically, we introduce Prophet, a unified action-to-video robot actuation model pretrained across large-scale, heterogeneous robot data to learn reusable action-outcome dynamics. It can few-shot adapt to new robots, objects, and environments, yielding a rollout-ready simulator. On top of Prophet, we reinforce action policies with Flow-action-GRPO (FA-GRPO), which adapts Flow-GRPO to operate on VLA actions, and with FlowScale, a stepwise reweighting that rescales per-step gradients in the flow head. Together, Prophet, FA-GRPO, and FlowScale constitute ProphRL, a practical, data- and compute-efficient path to VLA post-training. Experiments show 5-17% success gains on public benchmarks and 24-30% gains on real robots across different VLA variants.
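As a rough illustration of the FA-GRPO idea (adapting a GRPO-style group-relative objective to flow-based VLA action heads), the sketch below computes group-normalized advantages and a PPO-style clipped surrogate over per-denoising-step log-probabilities. Tensor shapes, function names, and the clipping constant are assumptions for illustration; the paper's exact objective may differ.

```python
# Hedged sketch of a GRPO-style objective on a flow-based action head.
# Shapes and hyperparameters are illustrative assumptions.
import torch


def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: (num_groups, rollouts_per_group). GRPO replaces a learned
    critic with reward normalization within each group of rollouts that
    share the same initial state and instruction."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True).clamp_min(1e-6)
    return (rewards - mean) / std


def fa_grpo_surrogate(
    logp_new: torch.Tensor,   # (groups, rollouts, denoise_steps), current policy
    logp_old: torch.Tensor,   # same shape, from the behavior policy
    rewards: torch.Tensor,    # (groups, rollouts)
    clip_eps: float = 0.2,
) -> torch.Tensor:
    """PPO-style clipped surrogate applied per denoising step of the
    flow head, so each step of action generation receives credit."""
    adv = group_relative_advantages(rewards).unsqueeze(-1)  # broadcast over steps
    ratio = (logp_new - logp_old).exp()
    unclipped = ratio * adv
    clipped = ratio.clamp(1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -torch.min(unclipped, clipped).mean()
```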
Problem

Research questions and friction points this paper is trying to address.

VLA policies overfit to demonstrations and are brittle under distribution shift
Reinforcement learning requires expensive real-robot interaction and complex simulators
Current methods lack data efficiency and optimization stability in VLA post-training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learned world model for efficient robot simulation
Reinforcement learning tailored to flow-based action heads
Stepwise gradient reweighting for stable optimization (sketched below)
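The stepwise reweighting idea can be sketched as scaling each denoising step's loss term, which rescales that step's gradient contribution through the flow head by the same factor. The geometric schedule below is purely an assumption for illustration; the paper's actual FlowScale rule is not reproduced here.

```python
# Hedged sketch of stepwise gradient reweighting in the spirit of
# FlowScale. The schedule is an illustrative assumption, not the
# paper's rule.
import torch


def stepwise_weights(num_steps: int, gamma: float = 0.9) -> torch.Tensor:
    """Example schedule: geometrically discount later denoising steps,
    normalized so the weights average to 1."""
    w = gamma ** torch.arange(num_steps, dtype=torch.float32)
    return w * (num_steps / w.sum())


def reweighted_loss(per_step_losses: torch.Tensor) -> torch.Tensor:
    """per_step_losses: (batch, denoise_steps). Scaling a step's loss
    rescales its gradient by the same factor."""
    w = stepwise_weights(per_step_losses.shape[1]).to(per_step_losses.device)
    return (per_step_losses * w).mean()
```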
Jiahui Zhang
School of Data Science, Fudan University
Ze Huang
School of Data Science, Fudan University
Chun Gu
Fudan University
Zipei Ma
School of Data Science, Fudan University
Li Zhang
School of Data Science, Fudan University