AI Summary
Diffusion-based policies achieve strong performance in online reinforcement learning but suffer from prohibitively slow inference due to iterative multi-step sampling. To address this, we propose Flow Policy Mirror Descent (FPMD), the first framework enabling single-step *explicit* policy inference. FPMD parameterizes both a flow policy and a MeanFlow policy grounded in flow matching, and integrates them into a mirror descent optimization framework. Crucially, we theoretically characterize the quantitative relationship between distributional variance and single-step discretization error, enabling high-fidelity one-step sampling *without* knowledge distillation or consistency training. Evaluated on the MuJoCo benchmark, FPMD matches state-of-the-art diffusion policies in control performance while reducing function evaluations by two to three orders of magnitude, thereby significantly enhancing real-time capability for online decision-making.
Abstract
Diffusion policies have achieved great success in online reinforcement learning (RL) due to their strong expressive capacity. However, inference with diffusion policy models relies on a slow iterative sampling process, which limits their responsiveness. To overcome this limitation, we propose Flow Policy Mirror Descent (FPMD), an online RL algorithm that enables single-step sampling during policy inference. Our approach exploits a theoretical connection between the distribution variance and the discretization error of single-step sampling in straight-interpolation flow matching models, and requires no extra distillation or consistency training. We present two algorithm variants based on flow policy and MeanFlow policy parametrizations, respectively. Extensive empirical evaluations on MuJoCo benchmarks demonstrate that our algorithms achieve performance comparable to diffusion policy baselines while requiring hundreds of times fewer function evaluations during inference.
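To make the variance-discretization connection concrete, here is a minimal sketch (with hypothetical names, not the paper's implementation) of single-step Euler sampling in a straight-interpolation flow model. Under the linear path x_t = (1 - t)·x0 + t·x1, the conditional velocity is x1 - x0; when the target conditional distribution has low variance, the marginal velocity field is nearly constant along each path, so a single Euler step from t = 0 to t = 1 incurs little discretization error. The example uses the limiting zero-variance case (a point-mass target), where the straight path has no curvature and one step is exact.

```python
import numpy as np

TARGET = 3.0  # deterministic (zero-variance) target, chosen for illustration

def velocity(x, t):
    """Exact marginal velocity field transporting N(0, 1) noise toward the
    point mass at TARGET along the straight interpolation path."""
    return (TARGET - x) / (1.0 - t)

def sample(n_steps, rng):
    """Euler integration of dx/dt = velocity(x, t) from t = 0 to t = 1,
    starting from Gaussian noise. n_steps = number of function evaluations."""
    x = rng.standard_normal()
    h = 1.0 / n_steps
    for k in range(n_steps):
        x = x + h * velocity(x, k * h)
    return x

rng = np.random.default_rng(0)
one_step = sample(1, rng)     # a single function evaluation
many_step = sample(100, rng)  # 100 function evaluations
# With zero target variance the velocity is constant along each path, so the
# single-step sample already matches the many-step one.
print(one_step, many_step)
```

With a broad (high-variance) target distribution, the marginal velocity field curves and the one-step sample deviates from the many-step one; bounding that deviation by the distribution's variance is the quantitative relationship the abstract refers to.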