🤖 AI Summary
Existing natural policy gradient (NPG) convergence analyses are restricted to finite-state Markov decision processes (MDPs), limiting theoretical justification for applications in infinite-state stochastic systems such as queueing networks.
Method: We develop a state-dependent upper bound on the relative value function as the central analytical tool, integrating queueing-theoretic modeling with structured MDP analysis. Starting from the MaxWeight policy, we analyze NPG under the average-reward criterion.
Contribution/Results: We establish the first $O(1/\sqrt{T})$ convergence rate for NPG in countably infinite-state, average-reward MDPs, demonstrated on canonical queueing MDPs. This result removes the long-standing finite-state restriction and extends to general countable-state MDPs satisfying mild structural conditions (e.g., uniform ergodicity and bounded relative value differences). Our analysis provides the first rigorous convergence guarantee for NPG in such complex stochastic dynamical systems, enabling theoretically grounded application to large-scale or infinite-state control problems.
📝 Abstract
A wide variety of queueing systems can be naturally modeled as infinite-state Markov Decision Processes (MDPs). In the reinforcement learning (RL) context, a variety of algorithms have been developed to learn and optimize these MDPs. At the heart of many popular policy-gradient based learning algorithms, such as natural actor-critic, TRPO, and PPO, lies the Natural Policy Gradient (NPG) policy optimization algorithm. Convergence results for these RL algorithms rest on convergence results for the NPG algorithm. However, all existing results on the convergence of the NPG algorithm are limited to finite-state settings. We study a general class of queueing MDPs and prove an $O(1/\sqrt{T})$ convergence rate for the NPG algorithm when it is initialized with the MaxWeight policy. This is the first convergence rate bound for the NPG algorithm for a general class of infinite-state average-reward MDPs. Moreover, our result applies beyond the queueing setting to any countably infinite-state MDP satisfying certain mild structural assumptions, given a sufficiently good initial policy. Key to our result are state-dependent bounds on the relative value function achieved by the iterate policies of the NPG algorithm.
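To make the two ingredients of the result concrete, the following is a minimal, hypothetical Python sketch of (a) a MaxWeight action rule for a multi-queue, single-server system and (b) a tabular softmax NPG step, which in this setting takes the multiplicative-weights form $\pi_{t+1}(a\mid s) \propto \pi_t(a\mid s)\exp(\eta A^{\pi_t}(s,a))$. The queueing model, function names, and parameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def maxweight_action(queue_lengths, service_rates):
    # MaxWeight (illustrative): serve the queue i maximizing q_i * mu_i,
    # i.e., weight each queue by backlog times service rate.
    weights = np.asarray(queue_lengths, dtype=float) * np.asarray(service_rates, dtype=float)
    return int(np.argmax(weights))

def npg_softmax_update(policy, advantages, eta):
    # Tabular NPG step for a softmax-parameterized policy (illustrative):
    # pi_{t+1}(a|s) proportional to pi_t(a|s) * exp(eta * A^pi(s, a)).
    # `policy` and `advantages` are (num_states, num_actions) arrays.
    new_policy = policy * np.exp(eta * advantages)
    return new_policy / new_policy.sum(axis=1, keepdims=True)
```

For example, with queues $(3, 1)$ and service rates $(1, 2)$, MaxWeight serves queue 0 since $3 \cdot 1 > 1 \cdot 2$; and each NPG step shifts probability mass toward actions with higher estimated advantage while keeping each row a valid distribution.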