🤖 AI Summary
In online reinforcement learning (RL), policy representations based on normalizing flows suffer from a fundamental objective mismatch with value-driven optimization. This paper introduces FlowRL, the first framework to incorporate differentiable flow models into online RL: it defines an ordinary differential equation (ODE)-based policy via a state-dependent velocity field and jointly optimizes the Q-function maximization objective with Wasserstein-2 distance regularization, enabling value-aware dynamic action generation. FlowRL bridges the intrinsic gap between the density estimation objective of flow modeling and the policy optimization objective in RL. Evaluated on DMControl and HumanoidBench benchmarks, FlowRL achieves state-of-the-art performance among online RL algorithms, demonstrating significantly improved modeling capacity for multimodal and non-Gaussian action distributions, as well as enhanced policy stability.
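The constrained objective described above can be written schematically as follows. This is a hedged reconstruction from the summary, not the paper's exact notation: $\pi_\theta$ denotes the flow policy, $\mathcal{D}$ the replay buffer, $\pi_b^*$ the behavior-optimal policy implicitly derived from the buffer, and $\epsilon$ the trust-region radius are all assumed symbol choices:

$$
\max_{\theta} \;\; \mathbb{E}_{s \sim \mathcal{D}}\big[\, Q(s, \pi_\theta(s)) \,\big]
\quad \text{s.t.} \quad
W_2\big(\pi_\theta(\cdot \mid s),\; \pi_b^*(\cdot \mid s)\big) \le \epsilon ,
$$

i.e., the policy maximizes the Q-function while staying within a Wasserstein-2 ball around the buffer-derived reference policy, which is what aligns flow training with value-driven optimization.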
📝 Abstract
We present **FlowRL**, a novel framework for online reinforcement learning that integrates flow-based policy representation with Wasserstein-2-regularized optimization. We argue that, in addition to training signals, enhancing the expressiveness of the policy class is crucial for performance gains in RL. Flow-based generative models offer such potential, excelling at capturing complex, multimodal action distributions. However, their direct application in online RL is challenging due to a fundamental objective mismatch: standard flow training optimizes for static data imitation, while RL requires value-based policy optimization through a dynamic buffer, leading to difficult optimization landscapes. FlowRL first models policies via a state-dependent velocity field, generating actions through deterministic ODE integration from noise. We derive a constrained policy search objective that jointly maximizes Q through the flow policy while bounding the Wasserstein-2 distance to a behavior-optimal policy implicitly derived from the replay buffer. This formulation effectively aligns flow optimization with the RL objective, enabling efficient and value-aware policy learning despite the complexity of the policy class. Empirical evaluations on DMControl and HumanoidBench demonstrate that FlowRL achieves competitive performance on online reinforcement learning benchmarks.
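The action-generation mechanism described in the abstract, a state-dependent velocity field integrated as a deterministic ODE from Gaussian noise to an action, can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the tiny tanh network standing in for the velocity field, the fixed Euler integrator, the step count, and all weight shapes are assumptions for demonstration.

```python
import numpy as np

def velocity_field(state, action, t, W1, W2):
    """Hypothetical stand-in for the learned velocity field v(a, s, t).

    A single tanh layer mapping (state, action, time) to da/dt;
    the real model would be a trained neural network.
    """
    x = np.concatenate([state, action, [t]])
    h = np.tanh(W1 @ x)
    return W2 @ h

def flow_policy(state, W1, W2, steps=10, rng=None):
    """Generate an action by deterministic Euler integration of the ODE
    da/dt = v(a, s, t) from noise a(0) ~ N(0, I) to the action a(1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    a = rng.standard_normal(W2.shape[0])  # initial noise sample
    dt = 1.0 / steps
    for i in range(steps):
        a = a + dt * velocity_field(state, a, i * dt, W1, W2)
    return a
```

Because integration is deterministic given the initial noise, the map from noise to action is differentiable end-to-end, which is what lets the Q-maximization gradient flow back into the velocity field during training.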