🤖 AI Summary
This paper addresses the lack of reliable termination criteria and solution-verification mechanisms for policy gradient methods in finite state and action Markov decision processes (MDPs). We propose the **advantage gap function**, a novel, efficiently computable termination criterion. It yields, for the first time, **strongly polynomial-time convergence guarantees** for policy gradient methods and provides **per-state optimality certificates**, enabling verification of whether the current policy is optimal at each state without prior knowledge of optimal policies or external benchmarks. Furthermore, by integrating adaptive step sizes with a refined stochastic gradient analysis, our algorithm achieves a **linear convergence rate independent of the stationary state distribution** in the deterministic setting and **sublinear per-state convergence** under stochastic sampling. This constitutes the first policy gradient termination and verification framework that simultaneously ensures computational tractability, theoretical rigor, and practical verifiability.
📄 Abstract
This paper proposes a novel termination criterion, termed the advantage gap function, for finite state and action Markov decision processes (MDPs) and reinforcement learning (RL). By incorporating this advantage gap function into the design of step size rules and deriving a new linear rate of convergence that is independent of the stationary state distribution of the optimal policy, we demonstrate that policy gradient methods can solve MDPs in strongly polynomial time. To the best of our knowledge, this is the first time such strong convergence properties have been established for policy gradient methods. Moreover, in the stochastic setting, where only stochastic estimates of policy gradients are available, we show that the advantage gap function provides close approximations of the optimality gap for each individual state and exhibits a sublinear rate of convergence at every state. The advantage gap function can be easily estimated in the stochastic case, and when coupled with easily computable upper bounds on policy values, it provides a convenient way to validate the solutions generated by policy gradient methods. Our developments therefore offer a principled and computable measure of optimality for RL, whereas current practice tends to rely on algorithm-to-algorithm or baseline comparisons with no certificate of optimality.
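To make the idea concrete, here is a minimal sketch of a per-state advantage-style gap for a tabular MDP, using the standard identity that a policy is optimal at a state exactly when no action has positive advantage there. The toy MDP, the uniform policy, and the exact form of the gap are illustrative assumptions for this sketch, not the paper's construction:

```python
# Sketch (assumptions: toy 2-state, 2-action MDP; gap(s) = max_a Q^pi(s,a) - V^pi(s),
# a standard advantage-based quantity, not necessarily the paper's exact definition).
import numpy as np

gamma = 0.9
# Transition kernel P[s, a, s'] and rewards r[s, a] for a hypothetical MDP.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])

def advantage_gaps(pi):
    """Per-state gap max_a Q^pi(s,a) - V^pi(s); nonnegative, and zero at a
    state exactly when pi already acts greedily (optimally) there."""
    n_s = r.shape[0]
    # Exact policy evaluation: V^pi = (I - gamma * P_pi)^{-1} r_pi.
    P_pi = np.einsum('sa,sat->st', pi, P)   # state-to-state kernel under pi
    r_pi = np.einsum('sa,sa->s', pi, r)     # expected one-step reward under pi
    V = np.linalg.solve(np.eye(n_s) - gamma * P_pi, r_pi)
    Q = r + gamma * P @ V                   # Q^pi[s, a]
    return Q.max(axis=1) - (pi * Q).sum(axis=1)

# Uniform random policy: positive gaps certify sub-optimality state by state,
# so a policy gradient loop could terminate once all gaps fall below a tolerance.
pi = np.full((2, 2), 0.5)
gaps = advantage_gaps(pi)
print(gaps)                                 # per-state optimality certificates
```

Because the gap is computed per state, it serves directly as both a termination test (stop when `gaps.max()` is below a tolerance) and a per-state certificate, matching the role the abstract assigns to the advantage gap function.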