🤖 AI Summary
This work addresses the inefficiency of training diffusion and flow-based policies in online reinforcement learning, which stems from the absence of direct samples from the target distribution. The authors propose a reverse flow matching framework that formulates policy learning as posterior mean estimation given noisy samples, thereby unifying the training of both policy classes. This framework extends the Boltzmann distribution objective—previously limited to diffusion policies—to flow policies for the first time, and constructs a minimum-variance estimator by integrating Q-values with gradient information. Furthermore, it introduces reverse inference and a Langevin–Stein operator to derive zero-mean control variates that reduce the variance of importance sampling. Empirical results on continuous control benchmarks demonstrate that the resulting flow policies significantly outperform existing diffusion-based baselines, achieving higher sample efficiency and training stability.
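In standard notation (the symbols below are generic conventions from the diffusion/flow literature, not necessarily the paper's own), the setup can be sketched as: the target policy is a Boltzmann distribution shaped by the Q-function, and the per-noise-level training target is a posterior mean over the clean action given a noisy one:

```latex
\pi^{*}(a \mid s) \;\propto\; \exp\!\big(Q(s,a)/\alpha\big),
\qquad
\hat{a}_{0}(a_t) \;=\; \mathbb{E}\big[a_0 \mid a_t\big],
```

where $\alpha$ is a temperature and $a_t$ is an intermediate noisy sample along the diffusion or flow path. Since $\pi^{*}$ is unnormalized and cannot be sampled directly, the posterior mean must be estimated, which is where the variance-reduction machinery below comes in.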
📝 Abstract
Diffusion and flow policies are gaining prominence in online reinforcement learning (RL) due to their expressive power, yet training them efficiently remains a critical challenge. A fundamental difficulty in online RL is the lack of direct samples from the target distribution; instead, the target is an unnormalized Boltzmann distribution defined by the Q-function. To address this, two seemingly distinct families of methods have been proposed for diffusion policies: a noise-expectation family, which uses a weighted average of noise as the training target, and a gradient-expectation family, which uses a weighted average of Q-function gradients. Yet it remains unclear how these objectives relate formally, or whether they can be synthesized into a more general formulation. In this paper, we propose a unified framework, reverse flow matching (RFM), which rigorously addresses the problem of training diffusion and flow models without direct target samples. By adopting a reverse inferential perspective, we formulate the training target as a posterior mean estimation problem given an intermediate noisy sample. Crucially, we introduce Langevin–Stein operators to construct zero-mean control variates, deriving a general class of estimators that effectively reduce importance sampling variance. We show that existing noise-expectation and gradient-expectation methods are two specific instances within this broader class. This unified view yields two key advancements: it extends Boltzmann-distribution targeting from diffusion policies to flow policies, and it enables the principled combination of Q-value and Q-gradient information to derive an optimal, minimum-variance estimator, thereby improving training efficiency and stability. We instantiate RFM to train a flow policy in online RL, and demonstrate improved performance on continuous-control benchmarks compared to diffusion policy baselines.
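To make the control-variate idea concrete, here is a minimal, self-contained sketch of a Stein (zero-mean) control variate inside self-normalized importance sampling. Everything here is an assumption for illustration — the one-dimensional `Q`, the temperature `alpha`, the standard-normal proposal, and the test function `phi(a) = a` are hypothetical choices, not the paper's estimator — but the mechanism is the same: for a proposal density $p$, the Langevin–Stein operator yields statistics with exact mean zero under $p$, which can be regressed away to shrink estimator variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def Q(a):
    # Hypothetical scalar "Q-function" (not from the paper).
    return -(a - 1.0) ** 2

alpha = 0.5   # temperature of the Boltzmann target pi(a) ∝ p(a) * exp(Q(a)/alpha)
n = 2000

a = rng.standard_normal(n)      # samples from the proposal p = N(0, 1)
w = np.exp(Q(a) / alpha)        # unnormalized Boltzmann importance weights
f = w * a                       # integrand of the numerator of E_pi[a]

# Langevin-Stein identity for p = N(0, 1): for any smooth, integrable phi,
# E_p[phi'(a) - a * phi(a)] = 0.  Taking phi(a) = a gives the zero-mean
# control variate g(a) = 1 - a**2.
g = 1.0 - a ** 2
c = np.cov(f, g)[0, 1] / np.var(g)   # regression coefficient; the numerator's
                                     # variance shrinks by its squared
                                     # correlation with g
est = (f - c * g).mean() / w.mean()  # self-normalized IS posterior-mean estimate
print(est)   # close to 0.8, the exact mean of this Gaussian-times-Gaussian target
```

Because both Gaussians here can be multiplied in closed form, the exact posterior mean is $4/5 = 0.8$, so the estimate can be checked directly; in the RL setting no such closed form exists, which is why low-variance estimators matter.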