🤖 AI Summary
This work addresses the suboptimality of flow matching (FM) policies in variable-horizon tasks, such as minimum-time control, when they are trained solely on imperfect expert demonstrations (e.g., human teleoperation). To overcome this limitation, we propose two reinforcement learning (RL)-enhanced FM methods: Reward-Weighted Flow Matching (RWFM) and Group Relative Policy Optimization (GRPO) with a learned reward surrogate. Both integrate RL principles into the FM framework through reward-weighted trajectory sampling, population-based comparative optimization, and explicit variable-horizon planning. Evaluated on a suite of simulated unicycle dynamics tasks, our methods significantly outperform standard imitation-based FM: GRPO reduces execution cost by 50-85%, while RWFM delivers consistent improvements. To our knowledge, this is the first systematic integration of RL into the FM paradigm aimed at breaking the performance ceiling imposed by demonstration quality, pointing toward high-precision, low-latency optimal control.
📝 Abstract
Flow-matching policies have emerged as a powerful paradigm for generalist robotics. These models are trained to imitate an action chunk, conditioned on sensor observations and textual instructions. Often, training demonstrations are generated by a suboptimal policy, such as a human operator. This work explores training flow-matching policies via reinforcement learning to surpass the performance of the original demonstration policy. We particularly note minimum-time control as a key application and present a simple scheme for variable-horizon flow-matching planning. We then introduce two families of approaches: a simple Reward-Weighted Flow Matching (RWFM) scheme and a Group Relative Policy Optimization (GRPO) approach with a learned reward surrogate. Our policies are trained on an illustrative suite of simulated unicycle dynamics tasks, and we show that both approaches dramatically improve upon the suboptimal demonstrator performance, with the GRPO approach in particular generally incurring between $50\%$ and $85\%$ less cost than a naive Imitation Learning Flow Matching (ILFM) approach.
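To make the reward-weighting idea concrete, below is a minimal sketch of a reward-weighted conditional flow matching objective. The linear interpolation path, the exponential `exp(r / beta)` weighting, and the toy oracle model are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a Reward-Weighted Flow Matching (RWFM) style loss.
# The weighting scheme and interpolation path are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def rwfm_loss(v_theta, x0, x1, t, rewards, beta=1.0):
    """Reward-weighted conditional flow matching loss.

    x0: noise samples, x1: demonstration action chunks, t: times in [0, 1].
    Higher-reward demonstrations get exponentially larger weight
    (assumed exp(r / beta), normalized over the batch).
    """
    xt = (1.0 - t[:, None]) * x0 + t[:, None] * x1   # linear interpolation path
    target = x1 - x0                                  # conditional velocity field
    w = np.exp(rewards / beta)
    w = w / w.sum()                                   # normalize weights over batch
    per_sample = np.sum((v_theta(xt, t) - target) ** 2, axis=1)
    return float(np.sum(w * per_sample))

# Toy check: a model that outputs the true conditional velocity incurs zero loss.
x0 = rng.standard_normal((8, 4))
x1 = rng.standard_normal((8, 4))
t = rng.uniform(size=8)
r = rng.uniform(size=8)
oracle = lambda xt, t: x1 - x0
print(rwfm_loss(oracle, x0, x1, t, r))  # 0.0
```

The key design point is that imitation-quality weighting reuses the standard FM regression loss unchanged; only the per-sample weights differ, so low-reward demonstrations contribute less to the learned velocity field.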