🤖 AI Summary
This work addresses the high computational cost and numerical instability encountered when extending policy-based reinforcement learning algorithms like PPO to continuous normalizing flow (CNF) policies, which typically require likelihood evaluation along full flow trajectories. To overcome these challenges, the authors propose PolicyFlow, a novel on-policy algorithm that approximates importance ratios by leveraging velocity field variations along a simple interpolation path, thereby circumventing the need for full trajectory likelihood computation. Additionally, PolicyFlow incorporates a lightweight Brownian regularization term, inspired by Brownian motion, to implicitly enhance policy diversity and mitigate mode collapse. Experimental results demonstrate that PolicyFlow matches or surpasses the performance of Gaussian PPO and flow-based baselines such as FPO and DPPO across diverse environments—including MultiGoal, PointMaze, IsaacLab, and MuJoCo Playground—with particularly strong capabilities in modeling multimodal action distributions.
📝 Abstract
Among on-policy reinforcement learning algorithms, Proximal Policy Optimization (PPO) is widely favored for its simplicity, numerical stability, and strong empirical performance. Standard PPO relies on surrogate objectives defined via importance ratios, which require evaluating policy likelihoods; this is typically straightforward when the policy is modeled as a Gaussian distribution. However, extending PPO to more expressive, high-capacity policy models such as continuous normalizing flows (CNFs), also known as flow-matching models, is challenging because likelihood evaluation along the full flow trajectory is computationally expensive and often numerically unstable. To resolve this issue, we propose PolicyFlow, a novel on-policy CNF-based reinforcement learning algorithm that integrates expressive CNF policies with PPO-style objectives without requiring likelihood evaluation along the full flow path. PolicyFlow approximates importance ratios using velocity field variations along a simple interpolation path, reducing computational overhead without compromising training stability. To prevent mode collapse and further encourage diverse behaviors, we propose the Brownian Regularizer, an implicit policy entropy regularizer inspired by Brownian motion, which is conceptually elegant and computationally lightweight. Experiments on diverse tasks across various environments, including MultiGoal, PointMaze, IsaacLab, and MuJoCo Playground, show that PolicyFlow achieves competitive or superior performance compared to PPO with Gaussian policies and to flow-based baselines including FPO and DPPO. Notably, results on MultiGoal highlight PolicyFlow's ability to capture richer multimodal action distributions.
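To make concrete why the Gaussian case is easy, here is a minimal sketch of the standard PPO clipped surrogate with a diagonal-Gaussian policy (this is the baseline setting the abstract contrasts against, not the PolicyFlow method itself); the Gaussian likelihood is available in closed form, so the importance ratio costs a single forward pass, whereas a CNF policy would require integrating the log-density along the full flow trajectory. All function names here are illustrative.

```python
import numpy as np

def gaussian_log_prob(actions, mu, sigma):
    """Closed-form log-likelihood of a diagonal-Gaussian policy.

    This cheap closed form is exactly what a CNF policy lacks: its
    log-density must be computed along the whole flow trajectory.
    """
    return np.sum(
        -0.5 * ((actions - mu) / sigma) ** 2
        - np.log(sigma)
        - 0.5 * np.log(2.0 * np.pi),
        axis=-1,
    )

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """PPO-Clip surrogate: E[min(r * A, clip(r, 1 - eps, 1 + eps) * A)],
    where r = pi_new(a|s) / pi_old(a|s) is the importance ratio."""
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    return np.mean(np.minimum(ratio * advantages, clipped * advantages))
```

When the new and old policies coincide, the ratio is exactly 1 and the surrogate reduces to the mean advantage; the clip term only activates once the policies diverge beyond `eps`.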