🤖 AI Summary
This paper studies value iteration (VI) for multichain Markov decision processes (MDPs) under the average-reward criterion, addressing the dual challenge of *navigation* (efficiently guiding policies into an optimal recurrent class) and *optimization* (achieving optimal long-run performance within it). To this end, we propose a novel fixed-point analysis framework grounded in the theory of nonexpansive operators on Banach spaces. This framework establishes a precise error-mapping relationship between average-reward and discounted MDPs, yields a sublinear convergence bound on the discounted value error, and provides a refined suboptimality decomposition for multichain MDPs. The analysis shows that the approach substantially accelerates VI convergence, giving improved convergence rates and sharper complexity measures than prior work, and it deepens understanding of the interplay between the navigation and optimization mechanisms inherent in multichain structure.
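For orientation, here is a minimal reminder of the two operators involved, using standard textbook definitions rather than notation taken from the paper: the discounted Bellman operator is a $\gamma$-contraction in the sup-norm, whereas its average-reward counterpart is only nonexpansive, which is why fixed points need not be unique and why an error-mapping between the two settings is useful.

```latex
% Discounted Bellman operator: a gamma-contraction in the sup-norm.
(T_\gamma v)(s) \;=\; \max_{a}\Big[\, r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, v(s') \Big],
\qquad \|T_\gamma v - T_\gamma w\|_\infty \;\le\; \gamma\, \|v - w\|_\infty .

% Average-reward Bellman operator (the gamma = 1 case): merely nonexpansive,
% so plain iteration need not contract and fixed points need not be unique.
(T v)(s) \;=\; \max_{a}\Big[\, r(s,a) + \sum_{s'} P(s' \mid s,a)\, v(s') \Big],
\qquad \|T v - T w\|_\infty \;\le\; \|v - w\|_\infty .
```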
📝 Abstract
We study value-iteration (VI) algorithms for solving general (a.k.a. multichain) Markov decision processes (MDPs) under the average-reward criterion, a fundamental but theoretically challenging setting. Beyond the difficulties inherent to all average-reward problems, namely the lack of contractivity and the non-uniqueness of fixed points of the Bellman operator, in the multichain setting an optimal policy must also solve a navigation subproblem: steering towards the best connected component, in addition to optimizing long-run performance within each component. We develop algorithms that better solve this navigation subproblem and thereby converge faster on multichain MDPs, obtaining improved rates of convergence and sharper measures of complexity relative to prior work. Many key components of our results are of potential independent interest, including novel connections between average-reward and discounted problems, optimal fixed-point methods for discounted VI that extend to general Banach spaces, new sublinear convergence rates for the discounted value error, and refined suboptimality decompositions for multichain MDPs. Overall, our results yield faster convergence rates for discounted and average-reward problems and expand the theoretical foundations of VI approaches.
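To make the baseline concrete, below is a minimal Python sketch of plain discounted value iteration and of relative value iteration for the average-reward criterion. All names, the toy MDP, and the stopping rules are illustrative assumptions, and the relative-VI scheme shown is only guaranteed to behave well under unichain/aperiodicity-type conditions, precisely the restriction the multichain analysis in the paper goes beyond; this is not the paper's accelerated algorithm.

```python
import numpy as np


def value_iteration(P, r, gamma=0.99, tol=1e-8, max_iters=100_000):
    """Plain discounted value iteration with a sup-norm stopping rule.

    P: transitions with shape (A, S, S), P[a, s, s_next] = Pr(s_next | s, a)
    r: rewards with shape (S, A)
    """
    v = np.zeros(r.shape[0])
    for _ in range(max_iters):
        # Bellman backup: Q[s, a] = r[s, a] + gamma * E[v(s') | s, a]
        q = r + gamma * np.einsum("asn,n->sa", P, v)
        v_new = q.max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v, q.argmax(axis=1)


def relative_value_iteration(P, r, tol=1e-8, max_iters=100_000, ref_state=0):
    """Relative value iteration for the average-reward criterion.

    Subtracting the backup value at a reference state keeps the iterates
    bounded; the subtracted scalar approximates the optimal gain. Convergence
    is only guaranteed under unichain/aperiodicity-type conditions.
    """
    h = np.zeros(r.shape[0])
    gain = 0.0
    for _ in range(max_iters):
        q = r + np.einsum("asn,n->sa", P, h)  # undiscounted backup (gamma = 1)
        backup = q.max(axis=1)
        gain = backup[ref_state]
        h_new = backup - gain                 # recenter to keep iterates bounded
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return gain, h, q.argmax(axis=1)


if __name__ == "__main__":
    # Toy random MDP (hypothetical numbers, for illustration only).
    rng = np.random.default_rng(0)
    A, S = 2, 4
    P = rng.random((A, S, S))
    P /= P.sum(axis=2, keepdims=True)  # normalize each row into a distribution
    r = rng.random((S, A))

    v, pi_disc = value_iteration(P, r, gamma=0.99)
    g, h, pi_avg = relative_value_iteration(P, r)
    print("discounted policy:", pi_disc)
    print("approximate optimal gain:", round(float(g), 4), "policy:", pi_avg)
```

A dense random MDP like the toy example is communicating, so relative VI behaves well here; on a genuinely multichain MDP the greedy policy must also navigate to the best recurrent class, which is the regime the paper's algorithms target.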