Policy Gradient Algorithms in Average-Reward Multichain MDPs

📅 2026-02-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing policy gradient methods lack theoretical guarantees in average-reward multichain Markov decision processes (MDPs): convergence and sample-complexity analyses are missing for non-ergodic, multichain structures. This work establishes the first policy gradient theoretical framework for average-reward multichain MDPs by leveraging the invariance of the classification of recurrent and transient states. Building on this foundation, the authors propose the α-clipped policy mirror ascent algorithm, which provably converges to an ε-optimal policy with finite sample complexity while operating entirely within the space of positive policies. This overcomes the longstanding restriction of prior methods to unichain or ergodic MDPs, substantially advancing the theoretical foundations of average-reward reinforcement learning in multichain settings.

📝 Abstract
While there is an extensive body of research analyzing policy gradient methods for discounted cumulative-reward MDPs, prior work on policy gradient methods for average-reward MDPs has been limited, with most existing results restricted to ergodic or unichain settings. In this work, we first establish a policy gradient theorem for average-reward multichain MDPs based on the invariance of the classification of recurrent and transient states. Building on this foundation, we develop refined analyses and obtain a collection of convergence and sample-complexity results that advance the understanding of this setting. In particular, we show that the proposed $\alpha$-clipped policy mirror ascent algorithm attains an $\epsilon$-optimal policy with respect to positive policies.
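The update described in the abstract can be illustrated with a short sketch. This is not the paper's exact algorithm: the exponentiated-gradient form is the standard KL-mirror-map instance of policy mirror ascent, and mixing with the uniform policy is one natural way to realize an α floor that keeps the policy strictly positive; the paper's precise clipping rule, step sizes, and Q-estimates may differ.

```python
import numpy as np

def alpha_clipped_pma_step(pi, Q, eta=1.0, alpha=0.05):
    """One illustrative policy-mirror-ascent step with an alpha floor.

    pi : (n_states, n_actions) row-stochastic policy.
    Q  : (n_states, n_actions) action-value estimates for the current policy.

    The KL mirror map turns the ascent step into a multiplicative-weights
    update; mixing with the uniform policy (an assumption, not necessarily
    the paper's clipping rule) guarantees every action keeps probability
    at least alpha / n_actions, so the iterate stays a positive policy.
    """
    n_actions = pi.shape[1]
    # Exponentiated-gradient (KL mirror ascent) update on the Q-values.
    new_pi = pi * np.exp(eta * Q)
    new_pi /= new_pi.sum(axis=1, keepdims=True)
    # Alpha floor: mix with uniform so each action's mass >= alpha/|A|.
    return (1.0 - alpha) * new_pi + alpha / n_actions
```

Iterating this step with Q-estimates for the average-reward criterion moves mass toward higher-valued actions while never driving any action's probability to zero, which is the sense in which the method operates within the space of positive policies.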
Problem

Research questions and friction points this paper is trying to address.

policy gradient
average-reward MDPs
multichain
recurrent states
transient states
Innovation

Methods, ideas, or system contributions that make the work stand out.

policy gradient
average-reward MDPs
multichain
α-clipped policy mirror ascent
sample complexity