Stochastic Approximation with Unbounded Markovian Noise: A General-Purpose Theorem

📅 2024-10-29
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
📄 PDF
🤖 AI Summary
This paper investigates the finite-time convergence of stochastic approximation algorithms under unbounded Markovian noise, with applications to average-reward reinforcement learning (including Q-learning), temporal-difference (TD) learning, and distributed stochastic optimization. To overcome the limitations of classical analyses—typically requiring i.i.d. or bounded noise—we propose the first general convergence theorem applicable to unbounded Markovian noise, grounded in Lyapunov stability theory and linear function approximation. Our key contributions are: (1) the first sample-complexity analysis for unbounded Markovian noise, achieving the optimal $\mathcal{O}(1/\varepsilon^2)$ rate; (2) a significantly tightened error bound for Q-learning, broadening the class of admissible behavior policies; and (3) the first finite-time convergence guarantee for cyclic block-coordinate descent–based distributed optimization, particularly effective for high-dimensional strongly convex problems.

📝 Abstract
Motivated by engineering applications such as resource allocation in networks and inventory systems, we consider average-reward Reinforcement Learning with unbounded state space and reward function. Recent works studied this problem in the actor-critic framework and established finite sample bounds assuming access to a critic with certain error guarantees. We complement their work by studying Temporal Difference (TD) learning with linear function approximation and establishing finite-time bounds with the optimal $\mathcal{O}\left(1/\epsilon^2\right)$ sample complexity. These results are obtained using the following general-purpose theorem for non-linear Stochastic Approximation (SA). Suppose that one constructs a Lyapunov function for a non-linear SA with certain drift condition. Then, our theorem establishes finite-time bounds when this SA is driven by unbounded Markovian noise under suitable conditions. It serves as a black box tool to generalize sample guarantees on SA from i.i.d. or martingale difference case to potentially unbounded Markovian noise. The generality and the mild assumption of the setup enables broad applicability of our theorem. We illustrate its power by studying two more systems: (i) We improve upon the finite-time bounds of $Q$-learning by tightening the error bounds and also allowing for a larger class of behavior policies. (ii) We establish the first ever finite-time bounds for distributed stochastic optimization of high-dimensional smooth strongly convex function using cyclic block coordinate descent.
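As a concrete illustration of the TD learning setting in the abstract, here is a minimal TD(0) sketch with linear function approximation, driven by a single Markovian trajectory of a small toy Markov reward process (the chain, features, and step sizes are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Markov reward process: 3 states (hypothetical instance).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.3, 0.3, 0.4]])   # transition matrix
r = np.array([1.0, 0.0, -1.0])    # reward per state
gamma = 0.9
Phi = np.array([[1.0, 0.0],       # one feature row per state
                [0.5, 0.5],
                [0.0, 1.0]])

theta = np.zeros(2)
s = 0
for t in range(100_000):
    alpha = 0.5 / (1 + t) ** 0.6        # diminishing step size
    s_next = rng.choice(3, p=P[s])      # Markovian (not i.i.d.) noise
    # TD(0) semi-gradient update along the single trajectory.
    delta = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * delta * Phi[s]
    s = s_next

print(theta)  # approaches the TD fixed point theta* with A theta* = b
```

Note that the updates use consecutive states of one trajectory rather than i.i.d. samples; this is exactly the Markovian-noise regime the paper's general theorem is built to handle (here the state space is finite, so the noise is trivially bounded; the paper's contribution is the unbounded case).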
Problem

Research questions and friction points this paper is trying to address.

Finite-time convergence of TD learning with linear function approximation under unbounded state spaces and rewards
A general-purpose theorem for stochastic approximation driven by unbounded Markovian noise
Tighter finite-time bounds for Q-learning and a first analysis of distributed stochastic optimization via cyclic block coordinate descent
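The Q-learning result concerns asynchronous updates along a single trajectory under a behavior policy. A minimal tabular sketch of that setting (the toy MDP, uniform behavior policy, and step sizes are hypothetical choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 2-state, 2-action MDP: P[s, a] is the next-state
# distribution, R[s, a] the expected reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9

Q = np.zeros((2, 2))
s = 0
for t in range(100_000):
    alpha = 1.0 / (1 + 0.01 * t)    # diminishing step size
    a = rng.integers(2)             # uniformly random behavior policy
    s_next = rng.choice(2, p=P[s, a])
    # Asynchronous Q-learning update along a single Markovian trajectory.
    target = R[s, a] + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    s = s_next

print(Q)  # approaches the optimal action-value function Q*
```

Only the state-action pair actually visited is updated at each step, which is what makes the analysis under a general behavior policy delicate; the paper's theorem widens the class of behavior policies for which such bounds hold.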
Innovation

Methods, ideas, or system contributions that make the work stand out.

TD learning with linear function approximation
Lyapunov function for non-linear stochastic approximation
Distributed optimization via cyclic coordinate descent
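The last item can be sketched as stochastic cyclic block coordinate descent on a toy smooth, strongly convex quadratic (the problem instance, noise model, and step sizes are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy strongly convex quadratic f(x) = 0.5 x^T A x - b^T x (hypothetical).
d, block = 8, 2
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)        # eigenvalues >= d, so strongly convex
b = rng.standard_normal(d)
x_star = np.linalg.solve(A, b)     # exact minimizer, for reference only

x = np.zeros(d)
for t in range(20_000):
    alpha = 0.02 / (1 + 0.001 * t)   # diminishing step size
    for j in range(0, d, block):     # one cyclic sweep over the blocks
        idx = slice(j, j + block)
        noise = 0.1 * rng.standard_normal(block)   # stochastic gradient noise
        grad_block = A[idx] @ x - b[idx] + noise   # noisy block gradient
        x[idx] -= alpha * grad_block

print(np.linalg.norm(x - x_star))  # small residual near the minimizer
```

The deterministic cyclic order (rather than random block sampling) is what makes each sweep a Markovian, non-i.i.d. update sequence, which is why finite-time guarantees for this scheme fall under the paper's general theorem.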
Shaan ul Haque
H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
Siva Theja Maguluri
Georgia Institute of Technology
Applied Probability · Optimization · Reinforcement Learning · Resource Allocation Algorithms · Networks