Wasserstein Proximal Policy Gradient

📅 2026-03-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of applying entropy-regularized reinforcement learning to implicit stochastic policies in continuous action spaces, where conventional methods rely on explicit policy densities and their gradients. Leveraging a Wasserstein geometric perspective, the authors propose a novel policy optimization algorithm that alternates between optimal transport proximal updates and Gaussian convolution heat steps via operator splitting. This approach eliminates the need to compute the log-density of the policy or its gradients explicitly. Notably, it is the first to integrate Wasserstein proximal updates into a policy gradient framework, enabling the use of implicit policies defined through pushforward mappings. The method enjoys a theoretical guarantee of global linear convergence, features a simple implementation, and demonstrates strong empirical performance on standard continuous control benchmarks, thereby achieving both theoretical rigor and practical efficacy.

📝 Abstract
We study policy gradient methods for continuous-action, entropy-regularized reinforcement learning through the lens of Wasserstein geometry. Starting from a Wasserstein proximal update, we derive Wasserstein Proximal Policy Gradient (WPPG) via an operator-splitting scheme that alternates an optimal transport update with a heat step implemented by Gaussian convolution. This formulation avoids evaluating the policy's log density or its gradient, making the method directly applicable to expressive implicit stochastic policies specified as pushforward maps. We establish a global linear convergence rate for WPPG, covering both exact policy evaluation and actor-critic implementations with controlled approximation error. Empirically, WPPG is simple to implement and attains competitive performance on standard continuous-control benchmarks.
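The splitting described in the abstract can be caricatured on a population of action samples: the Wasserstein proximal (optimal transport) update moves each sample along the critic's action gradient, and the heat step is realized as Gaussian convolution, i.e., adding Gaussian noise, so the policy's log density is never evaluated. The sketch below is an illustrative assumption about one such iteration, not the authors' implementation; `grad_Q`, the first-order surrogate for the proximal step, and the noise scale `sqrt(2 * temperature * step_size)` are all hypothetical choices.

```python
import numpy as np

def wppg_step(actions, grad_Q, step_size, temperature, rng=None):
    """One illustrative operator-splitting iteration on a sample population.

    actions: (n, d) array of actions drawn from the implicit policy.
    grad_Q:  callable returning the critic's gradient w.r.t. actions.
    """
    rng = rng or np.random.default_rng()
    # Transport step: a gradient-ascent surrogate for the Wasserstein
    # proximal update, moving samples toward higher Q-values.
    actions = actions + step_size * grad_Q(actions)
    # Heat step: Gaussian convolution of the action distribution, applied
    # sample-wise as additive noise; this carries the entropy
    # regularization without computing any policy density.
    noise_scale = np.sqrt(2.0 * temperature * step_size)
    return actions + noise_scale * rng.standard_normal(actions.shape)
```

On a quadratic critic peaked at the origin (`grad_Q = lambda a: -a`), repeated iterations contract the sample mean toward the optimum while the noise keeps the population spread out, mirroring the entropy-regularized target.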
Problem

Research questions and friction points this paper is trying to address.

policy gradient
continuous-action reinforcement learning
entropy regularization
implicit stochastic policies
Wasserstein geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wasserstein geometry
proximal policy gradient
implicit stochastic policies
optimal transport
operator splitting
Zhaoyu Zhu
Zhiyuan College, Shanghai Jiao Tong University, Shanghai 200240, China
Shuhan Zhang
School of Data Science, The Chinese University of Hong Kong, Shenzhen, Guangdong, China
Rui Gao
Associate Professor, The University of Texas at Austin
Operations Research
Shuang Li
Institute of Semiconductors, Chinese Academy of Sciences
AI, computer vision, 3D image processing