Mean Flow Policy with Instantaneous Velocity Constraint for One-step Action Generation

πŸ“… 2026-02-14
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ€– AI Summary
This work addresses the long-standing trade-off between expressiveness and computational efficiency in flow-based reinforcement learning policies, which struggle to deliver both high-fidelity actions and fast single-step generation. To overcome this limitation, we propose the Mean Velocity Policy (MVP), which models a mean velocity field to enable rapid single-step sampling. Crucially, MVP introduces an Instantaneous Velocity Constraint (IVC) as an essential boundary condition in policy learningβ€”a formulation that we theoretically prove enhances both policy expressiveness and learning accuracy. Combined with deterministic sampling, MVP achieves state-of-the-art success rates across multiple robotic manipulation tasks in Robomimic and OGBench, while simultaneously offering substantial improvements in both training and inference speed.
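The summary highlights the IVC as a boundary condition on the mean velocity field. Assuming MVP follows the standard mean-flow formulation (an assumption on our part; the paper's exact notation may differ), the constraint plausibly reads:

```latex
% Mean velocity over [r, t] as the time-average of the instantaneous
% velocity field v along the flow:
\[
  u(z_t, r, t) = \frac{1}{t - r} \int_r^t v(z_s, s)\, \mathrm{d}s .
\]
% Shrinking the interval (r -> t) yields the degenerate-interval boundary
% condition that an instantaneous velocity constraint would enforce:
\[
  u(z_t, t, t) = v(z_t, t) .
\]
```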

πŸ“ Abstract
Learning expressive and efficient policy functions is a promising direction in reinforcement learning (RL). While flow-based policies have recently proven effective in modeling complex action distributions with a fast deterministic sampling process, they still face a trade-off between expressiveness and computational burden, which is typically controlled by the number of flow steps. In this work, we propose mean velocity policy (MVP), a new generative policy function that models the mean velocity field to achieve the fastest one-step action generation. To ensure its high expressiveness, an instantaneous velocity constraint (IVC) is introduced on the mean velocity field during training. We theoretically prove that this design explicitly serves as a crucial boundary condition, thereby improving learning accuracy and enhancing policy expressiveness. Empirically, our MVP achieves state-of-the-art success rates across several challenging robotic manipulation tasks from Robomimic and OGBench. It also delivers substantial improvements in training and inference speed over existing flow-based policy baselines.
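The abstract's one-step action generation can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `mean_velocity` is a hypothetical stand-in for a trained mean-velocity network u_theta(z, r, t), and the single step across the full flow interval [0, 1] is what makes sampling one-step.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_velocity(z, r, t):
    """Hypothetical stand-in for a trained mean-velocity network
    u_theta(z, r, t); a fixed affine map purely for illustration."""
    return (0.5 + 0.1 * (t - r)) * z

def one_step_action(action_dim):
    """Draw noise z ~ N(0, I) and take a single step across the whole
    flow interval [0, 1]: a = z - (1 - 0) * u(z, r=0, t=1)."""
    z = rng.standard_normal(action_dim)
    return z - (1.0 - 0.0) * mean_velocity(z, r=0.0, t=1.0)

a = one_step_action(7)  # one network evaluation per action
print(a.shape)
```

The contrast with multi-step flow policies is that the integral of the instantaneous velocity is amortized into `mean_velocity`, so inference cost does not scale with the number of flow steps.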
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
flow-based policy
action generation
expressiveness
computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mean Velocity Policy
Instantaneous Velocity Constraint
One-step Action Generation
Flow-based Policy
Reinforcement Learning
Guojian Zhan
School of Vehicle and Mobility & College of AI, Tsinghua University
Letian Tao
School of Vehicle and Mobility & College of AI, Tsinghua University
Pengcheng Wang
UC Berkeley
robotics, control, reinforcement learning
Yixiao Wang
University of California, Berkeley
robotics, diffusion models, trajectory prediction
Yiheng Li
Berkeley AI Research (BAIR), UC Berkeley
Yuxin Chen
University of California, Berkeley
robotics, reinforcement learning
Masayoshi Tomizuka
Mechanical Engineering, University of California, Berkeley
mechanical engineering, dynamic systems, control, mechatronics
Shengbo Eben Li
School of Vehicle and Mobility & College of AI, Tsinghua University