Relative Entropy Pathwise Policy Optimization

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Online policy optimization suffers from instability due to high-variance policy gradients and difficulties in training action-conditioned value functions. Method: This paper proposes REPPO—a pure on-policy reinforcement learning algorithm that employs value-gradient-driven, path-wise updates. It is the first to realize path-wise gradient estimation exclusively from pure on-policy data, integrating score-function gradient estimation, differentiable action-conditioned value function modeling, and relative entropy–constrained regularization. Contribution/Results: REPPO achieves superior sample efficiency, training stability, and hyperparameter robustness while maintaining low memory overhead and supporting GPU-accelerated parallel training. On standard benchmarks, it significantly reduces sample complexity, training time, and memory consumption, outperforming state-of-the-art on-policy methods. Its design makes it suitable for applications including game playing, robotic decision-making, and online fine-tuning of large language models.

Technology Category

Application Category

📝 Abstract
Score-function policy gradients have delivered strong results in game-playing, robotics and language-model fine-tuning. Yet its high-variance often undermines training stability. On the other hand, pathwise policy gradients alleviate the training variance, but are reliable only when driven by an accurate action-conditioned value function which is notoriously hard to train without relying on past off-policy data. In this paper, we discuss how to construct a value-gradient driven, on-policy algorithm that allow training Q-value models purely from on-policy data, unlocking the possibility of using pathwise policy updates in the context of on-policy learning. We show how to balance stochastic policies for exploration with constrained policy updates for stable training, and evaluate important architectural components that facilitate accurate value function learning. Building on these insights, we propose Relative Entropy Pathwise Policy Optimization (REPPO), an efficient on-policy algorithm that combines the sample-efficiency of pathwise policy gradients with the simplicity and minimal memory footprint of standard on-policy learning. We demonstrate that REPPO provides strong empirical performance at decreased sample requirements, wall-clock time, memory footprint as well as high hyperparameter robustness in a set of experiments on two standard GPU-parallelized benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Reduce high-variance in score-function policy gradients
Train accurate action-conditioned value functions on-policy
Balance exploration and stability in policy updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

On-policy algorithm for pathwise policy updates
Balances exploration with constrained policy updates
Combines pathwise gradients with minimal memory footprint
🔎 Similar Papers
No similar papers found.