🤖 AI Summary
This work addresses the challenge of jointly leveraging offline data and enabling online policy exploration. We propose PGDA-RL, the first asynchronous primal-dual reinforcement learning framework that requires neither a simulator nor a fixed behavior policy and converges from a single trajectory. Methodologically, the regularized MDP is formulated as a linear program over occupation measures and solved via a two-timescale projected gradient descent-ascent algorithm: gradient estimates are obtained through experience replay, and an online dual-variable guidance mechanism enables real-time policy updates. Theoretically, under mild assumptions, PGDA-RL converges almost surely to the optimal value function and policy. Compared with existing primal-dual RL methods, it substantially relaxes stringent requirements on the environment interaction structure, such as periodic restarts or strong mixing, and improves both offline data efficiency and the robustness of online exploration.
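The mechanics described above can be sketched in a few lines of NumPy. This is a toy illustration under assumed notation, not the paper's implementation: the tabular MDP, the entropy regularizer, the step-size exponents, and the choice of which variable runs on the fast timescale are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 5, 3          # toy state/action counts (assumed)
gamma, tau = 0.9, 0.1  # discount factor and regularization strength (assumed)

V = np.zeros(nS)                          # primal variable: value estimates
mu = np.full((nS, nA), 1.0 / (nS * nA))  # dual variable: occupation measure
replay = []                               # experience-replay buffer

def project_simplex(x):
    """Euclidean projection of an array onto the probability simplex."""
    u = np.sort(x.ravel())[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, u.size + 1) > css - 1.0)[0][-1]
    theta = (css[k] - 1.0) / (k + 1)
    return np.maximum(x - theta, 0.0).reshape(x.shape)

# A hypothetical toy MDP, used only to emit the single trajectory.
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] is a dist. over s'
R = rng.random((nS, nA))

s = 0
for t in range(2000):
    # Online dual guidance: act according to the current dual variable.
    pi = mu[s] + 1e-8
    pi /= pi.sum()
    a = rng.choice(nA, p=pi)
    s_next = rng.choice(nS, p=P[s, a])
    replay.append((s, a, R[s, a], s_next))

    # Replay-based stochastic gradient estimate from one stored transition
    # (replay sampling only approximates sampling from mu itself).
    si, ai, ri, sj = replay[rng.integers(len(replay))]

    alpha = 1.0 / (1 + t) ** 0.9  # slow (primal) step size
    beta = 1.0 / (1 + t) ** 0.6   # fast (dual) step size

    # Gradient *descent* in V: the sampled residual contributes -1 at si,
    # +gamma at sj, plus the initial-distribution term (nu0 = delta_0 assumed).
    V[si] -= alpha * (-1.0)
    V[sj] -= alpha * gamma
    V[0] -= alpha * (1.0 - gamma)

    # Projected gradient *ascent* in mu with an entropy regularizer.
    delta = ri + gamma * V[sj] - V[si]    # sampled Bellman residual
    mu[si, ai] += beta * (delta - tau * (np.log(mu[si, ai] + 1e-12) + 1.0))
    mu = project_simplex(mu)
    s = s_next

print(round(mu.sum(), 6))  # the projection keeps mu a valid distribution
```

The two-timescale structure appears in the step sizes: the dual step `beta` decays more slowly than the primal step `alpha`, so the occupation-measure estimate tracks a quasi-stationary value estimate, and the whole loop consumes one correlated trajectory without restarts.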
📝 Abstract
We study reinforcement learning by combining recent advances in regularized linear programming formulations with the classical theory of stochastic approximation. Motivated by the challenge of designing algorithms that leverage off-policy data while maintaining on-policy exploration, we propose PGDA-RL, a novel primal-dual Projected Gradient Descent-Ascent algorithm for solving regularized Markov Decision Processes (MDPs). PGDA-RL integrates experience replay-based gradient estimation with a two-timescale decomposition of the underlying nested optimization problem. The algorithm operates asynchronously, interacts with the environment through a single trajectory of correlated data, and updates its policy online in response to the dual variable associated with the occupation measure of the underlying MDP. We prove that PGDA-RL converges almost surely to the optimal value function and policy of the regularized MDP. Our convergence analysis relies on tools from stochastic approximation theory and holds under weaker assumptions than those required by existing primal-dual RL approaches, notably removing the need for a simulator or a fixed behavioral policy.
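The LP view referenced in the abstract admits a standard saddle-point form; the generic version below uses assumed notation (the paper's exact regularizer and constraint set may differ):

```latex
L(V,\mu) \;=\; (1-\gamma)\sum_{s}\nu_0(s)\,V(s)
\;+\; \sum_{s,a}\mu(s,a)\Big(r(s,a) + \gamma\sum_{s'}P(s'\mid s,a)\,V(s') - V(s)\Big)
\;-\; \tau\,\Omega(\mu)
```

Here $V$ is the primal (value) variable, $\mu$ is the dual variable ranging over occupation measures, $\nu_0$ is the initial-state distribution, and $\Omega$ is the regularizer with strength $\tau$. Projected gradient descent-ascent seeks a saddle point: at optimality $V$ recovers the regularized value function, and the policy is read off the dual via $\pi(a\mid s) \propto \mu(s,a)$.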