Why Policy Gradient Algorithms Work for Undiscounted Total-Reward MDPs

📅 2025-10-21
🤖 AI Summary
Existing policy gradient theory lacks convergence guarantees in the undiscounted (γ = 1) total-reward setting used to train large language models, because conventional γ < 1 analyses break down in infinite-horizon MDPs, most notably through divergence of the state visitation measure. Method: The paper introduces an analytical framework grounded in Markov chain recurrence theory: it proves that the classification of states into recurrent and transient classes is invariant across policies that assign strictly positive probability to every action (as softmax policies do), and replaces the classical state visitation measure with a new transient visitation measure. Contribution/Results: This work establishes, for the first time, convergence guarantees for policy gradient algorithms under γ = 1, filling a fundamental theoretical gap in undiscounted policy optimization and providing a rigorous foundation for reinforcement learning–based training of large language models in infinite-horizon sequential decision-making.

📝 Abstract
The classical policy gradient method is the theoretical and conceptual foundation of modern policy-based reinforcement learning (RL) algorithms. Most rigorous analyses of such methods, particularly those establishing convergence guarantees, assume a discount factor $γ < 1$. However, a recent line of work on policy-based RL for large language models uses the undiscounted total-reward setting with $γ = 1$, rendering much of the existing theory inapplicable. In this paper, we provide analyses of the policy gradient method for undiscounted expected total-reward infinite-horizon MDPs based on two key insights: (i) the classification of the MDP states into recurrent and transient states is invariant over the set of policies that assign strictly positive probability to every action (as is typical in deep RL models employing a softmax output layer) and (ii) the classical state visitation measure (which may be ill-defined when $γ = 1$) can be replaced with a new object that we call the transient visitation measure.
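The abstract does not spell out the definition of the transient visitation measure; one plausible reading, sketched here under my own assumptions rather than taken from the paper, is to drop the $(1-γ)$ normalization of the classical discounted visitation measure and restrict the resulting series to transient states, where the expected occupancy remains finite:

```latex
% Classical discounted state-visitation measure (well-defined for \gamma < 1):
d_\gamma^\pi(s) \;=\; (1-\gamma)\sum_{t=0}^{\infty} \gamma^t \,\Pr^{\pi}(s_t = s).
% At \gamma = 1 the unnormalized series diverges on recurrent states, but on
% transient states the expected number of visits is finite, motivating
\nu^\pi(s) \;=\; \sum_{t=0}^{\infty} \Pr^{\pi}(s_t = s),
\qquad s \text{ transient under } \pi .
```

Finiteness on transient states follows from standard Markov chain theory: the expected number of visits to a transient state is finite, so the sum converges there even without a discount factor.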
Problem

Research questions and friction points this paper is trying to address.

Analyzing policy gradient convergence for undiscounted total-reward MDPs
Establishing theoretical guarantees when discount factor equals one
Developing new visitation measures for infinite-horizon reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes policy gradient for undiscounted total-reward MDPs
Classifies states into recurrent and transient categories
Introduces transient visitation measure to replace classical measure
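The invariance claim in insight (i) can be illustrated with a toy example (my own construction, not from the paper): whether a state is recurrent or transient in the policy-induced Markov chain depends only on which transitions have positive probability, and every strictly positive policy shares the same support. The sketch below classifies states for two different positive policies on a hypothetical 3-state, 2-action MDP and shows the classification coincides.

```python
# Toy MDP: P[(s, a)] maps to a dict {next_state: probability}.
# State 2 is absorbing; states 0 and 1 can reach it, so they are
# transient under any policy with full support over actions.
P = {
    (0, 0): {0: 0.5, 1: 0.5},
    (0, 1): {2: 1.0},
    (1, 0): {0: 0.3, 2: 0.7},
    (1, 1): {1: 0.9, 2: 0.1},
    (2, 0): {2: 1.0},
    (2, 1): {2: 1.0},
}
STATES, ACTIONS = range(3), range(2)

def support_graph(policy):
    """Edges s -> s' that occur with positive probability under the policy."""
    edges = {s: set() for s in STATES}
    for s in STATES:
        for a in ACTIONS:
            if policy[s][a] > 0:
                edges[s].update(s2 for s2, p in P[(s, a)].items() if p > 0)
    return edges

def reachable(edges, s):
    """All states reachable from s (including s) in the support graph."""
    seen, stack = {s}, [s]
    while stack:
        for s2 in edges[stack.pop()]:
            if s2 not in seen:
                seen.add(s2)
                stack.append(s2)
    return seen

def classify(policy):
    """A state is recurrent iff every state it can reach can reach it back."""
    edges = support_graph(policy)
    reach = {s: reachable(edges, s) for s in STATES}
    return tuple(
        "recurrent" if all(s in reach[s2] for s2 in reach[s]) else "transient"
        for s in STATES
    )

# Two different strictly positive (softmax-style) policies.
pi_a = {s: [0.5, 0.5] for s in STATES}
pi_b = {s: [0.9, 0.1] for s in STATES}
print(classify(pi_a))  # ('transient', 'transient', 'recurrent')
print(classify(pi_b))  # identical: classification depends only on the support
```

Because both policies put positive mass on every action, they induce the same support graph, and hence the same recurrent/transient split, which is exactly what makes a policy-independent visitation object possible.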
Jongmin Lee
Seoul National University Department of Mathematical Sciences
Ernest K. Ryu
University of California, Los Angeles
Deep learning theory · Convex optimization