🤖 AI Summary
This work addresses the challenge of applying entropy-regularized reinforcement learning to implicit stochastic policies in continuous action spaces, where conventional methods rely on explicit policy densities and their gradients. Taking a Wasserstein geometric perspective, the authors propose a policy optimization algorithm that, via operator splitting, alternates an optimal transport proximal update with a heat step implemented by Gaussian convolution. This eliminates the need to evaluate the policy's log-density or its gradient. Notably, the work is the first to integrate Wasserstein proximal updates into a policy gradient framework, enabling the use of expressive implicit policies defined through pushforward maps. The method comes with a global linear convergence guarantee, is simple to implement, and performs strongly on standard continuous-control benchmarks, combining theoretical rigor with practical efficacy.
📝 Abstract
We study policy gradient methods for continuous-action, entropy-regularized reinforcement learning through the lens of Wasserstein geometry. Starting from a Wasserstein proximal update, we derive Wasserstein Proximal Policy Gradient (WPPG) via an operator-splitting scheme that alternates an optimal transport update with a heat step implemented by Gaussian convolution. This formulation avoids evaluating the policy's log density or its gradient, making the method directly applicable to expressive implicit stochastic policies specified as pushforward maps. We establish a global linear convergence rate for WPPG, covering both exact policy evaluation and actor-critic implementations with controlled approximation error. Empirically, WPPG is simple to implement and attains competitive performance on standard continuous-control benchmarks.
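To make the operator-splitting idea concrete, here is a minimal toy sketch of the two alternating steps the abstract describes: a transport step that moves sampled actions along the critic's gradient (a first-order stand-in for the Wasserstein proximal update), followed by a heat step that adds Gaussian noise (Gaussian convolution implementing the entropy term). The quadratic critic `Q`, the step size `tau`, and the temperature `temp` are illustrative assumptions, not the authors' implementation; note that no policy log-density is ever evaluated.

```python
# Hypothetical sketch of the operator-splitting update on action samples.
# Toy critic Q(a) = -||a - a_star||^2, maximized at a_star.
import numpy as np

rng = np.random.default_rng(0)
a_star = np.array([1.0, -0.5])            # optimum of the toy critic

def grad_Q(a):
    # Gradient of Q(a) = -||a - a_star||^2 with respect to the action
    return -2.0 * (a - a_star)

tau, temp = 0.05, 0.01                    # step size, entropy temperature
particles = rng.normal(size=(512, 2))     # samples from an implicit policy

for _ in range(200):
    # Transport step: first-order surrogate for the Wasserstein
    # proximal update -- gradient ascent on the critic
    particles = particles + tau * grad_Q(particles)
    # Heat step: Gaussian convolution realizes the entropy
    # regularization without any log-density computation
    particles = particles + np.sqrt(2.0 * tau * temp) * rng.normal(size=particles.shape)

# The particle cloud concentrates near a_star, with spread set by temp.
```

In a full actor-critic implementation the transported samples would serve as regression targets for the pushforward policy network; here the particles themselves play the role of the policy's action samples.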