Convergence for Natural Policy Gradient on Infinite-State Queueing MDPs

📅 2024-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing natural policy gradient (NPG) convergence analyses are restricted to finite-state Markov decision processes (MDPs), limiting theoretical justification for applications in infinite-state stochastic systems such as queueing networks. Method: We develop a state-dependent upper bound on the relative value function as the central analytical tool, integrating queueing-theoretic modeling with structured MDP analysis. Starting from the MaxWeight policy, we analyze NPG under the average-reward criterion. Contribution/Results: We establish the first $O(1/\sqrt{T})$ convergence rate for NPG in countably infinite-state, average-reward MDPs—specifically demonstrated on canonical queueing MDPs. This result breaks the long-standing finite-state assumption and extends to general countable-state MDPs satisfying mild structural conditions (e.g., uniform ergodicity and bounded relative value differences). Our analysis provides the first rigorous convergence guarantee for NPG in complex stochastic dynamical systems, enabling theoretically grounded application to large-scale or infinite-state control problems.

📝 Abstract
A wide variety of queueing systems can be naturally modeled as infinite-state Markov Decision Processes (MDPs). In the reinforcement learning (RL) context, a variety of algorithms have been developed to learn and optimize these MDPs. At the heart of many popular policy-gradient based learning algorithms, such as natural actor-critic, TRPO, and PPO, lies the Natural Policy Gradient (NPG) policy optimization algorithm. Convergence results for these RL algorithms rest on convergence results for the NPG algorithm. However, all existing results on the convergence of the NPG algorithm are limited to finite-state settings. We study a general class of queueing MDPs, and prove an $O(1/\sqrt{T})$ convergence rate for the NPG algorithm when it is initialized with the MaxWeight policy. This is the first convergence rate bound for the NPG algorithm for a general class of infinite-state average-reward MDPs. Moreover, our result applies beyond the queueing setting to any countably infinite MDP satisfying certain mild structural assumptions, given a sufficiently good initial policy. Key to our result are state-dependent bounds on the relative value function achieved by the iterate policies of the NPG algorithm.
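For context on the algorithm the abstract analyzes: with a tabular softmax policy, one NPG step is known to reduce to a multiplicative-weights update on the action probabilities, $\pi_{t+1}(a \mid s) \propto \pi_t(a \mid s)\exp(\eta A_t(s,a))$. The sketch below is not code from the paper; it is a minimal illustration of that standard update, with the policy and advantage arrays (`pi`, `adv`) as assumed inputs.

```python
import numpy as np

def npg_softmax_step(pi, adv, eta):
    """One NPG step for a tabular softmax policy.

    For the softmax parameterization, the natural gradient step is
    equivalent to a multiplicative-weights update:
        pi'(a|s) ∝ pi(a|s) * exp(eta * A(s, a))

    pi  : (S, A) array of action probabilities per state
    adv : (S, A) array of advantage estimates under the current policy
    eta : step size
    """
    logits = np.log(pi) + eta * adv
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)
```

In the infinite-state setting studied here, the same update is applied state by state; the paper's contribution is showing the resulting iterates converge at rate $O(1/\sqrt{T})$ despite the unbounded state space, provided the initial policy (MaxWeight) is good enough.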
Problem

Research questions and friction points this paper is trying to address.

Convergence of Natural Policy Gradient in infinite-state MDPs
First convergence rate bound for NPG in queueing systems
Extends NPG results beyond finite-state to infinite-state MDPs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Natural Policy Gradient for infinite-state MDPs
Convergence rate O(1/√T) with MaxWeight initialization
State-dependent bounds on relative value function
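The MaxWeight policy used for initialization is a classical queueing heuristic: at each step, serve the queue with the largest product of queue length and service rate. This is not code from the paper, just a minimal sketch of that rule for a single-server system with hypothetical inputs `queues` and `service_rates`.

```python
import numpy as np

def maxweight_action(queues, service_rates):
    """MaxWeight: serve the queue maximizing queue-length × service-rate."""
    weights = np.asarray(queues, dtype=float) * np.asarray(service_rates, dtype=float)
    return int(np.argmax(weights))
```

Because MaxWeight is known to stabilize a broad class of queueing networks, starting NPG from it keeps the iterate policies' relative value functions controllable, which is what the state-dependent bounds exploit.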