Tight Finite Time Bounds of Two-Time-Scale Linear Stochastic Approximation with Markovian Noise

📅 2023-12-31
🏛️ arXiv.org
📈 Citations: 11
Influential: 6
🤖 AI Summary
This work addresses the long-standing challenge of establishing finite-time convergence guarantees for GTD-class off-policy reinforcement learning algorithms (e.g., TDC, GTD2) under Markovian noise. We derive the first tight finite-time mean-square error bound for two-timescale linear stochastic approximation with Markovian noise. Methodologically, we integrate Lyapunov function analysis, matrix perturbation theory, and spectral analysis of Markov chains to precisely characterize the coupled dynamics of the two-timescale iterates. Our key contribution is an error upper bound whose dominant term is $\mathrm{trace}(\Sigma^y)/k$, which exactly matches the asymptotic covariance from the central limit theorem—thereby unifying finite-time bounds with asymptotic distributional characterization. Furthermore, we provide the first sample-complexity-optimal guarantees for TDC, GTD, GTD2, and Polyak–Ruppert averaged TD algorithms under Markovian sampling.

📝 Abstract
Stochastic approximation (SA) is an iterative algorithm for finding the fixed point of an operator using noisy samples and is widely used in optimization and Reinforcement Learning (RL). The noise in RL exhibits a Markovian structure, and in some cases, such as gradient temporal difference (GTD) methods, SA is employed in a two-time-scale framework. This combination introduces significant theoretical challenges for analysis. We derive an upper bound on the error for the iterations of linear two-time-scale SA with Markovian noise. We demonstrate that the mean squared error decreases as $\mathrm{trace}(\Sigma^y)/k + o(1/k)$, where $k$ is the number of iterates and $\Sigma^y$ is an appropriately defined covariance matrix. A key feature of our bounds is that the leading term, $\Sigma^y$, exactly matches the covariance in the Central Limit Theorem (CLT) for two-time-scale SA, and we call these tight finite-time bounds. We illustrate their use in RL by establishing sample complexity for the off-policy algorithms TDC, GTD, and GTD2. A special case of linear two-time-scale SA that is extensively studied is linear SA with Polyak-Ruppert averaging. We present tight finite-time bounds corresponding to the covariance matrix of the CLT. Such bounds can be used to study TD-learning with Polyak-Ruppert averaging.
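The coupled iteration the abstract analyzes can be sketched numerically. This is a minimal toy, not the paper's construction: it uses i.i.d. Gaussian noise instead of Markovian noise, and the drift matrices and step-size exponents below are illustrative assumptions chosen so the coupled system is stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear two-time-scale SA (assumed instance, not from the paper).
d = 2
A11, A12 = -np.eye(d), 0.3 * np.eye(d)   # fast-iterate drift
A21, A22 = 0.2 * np.eye(d), -np.eye(d)   # slow-iterate drift

x, y = np.ones(d), np.ones(d)            # fast iterate x, slow iterate y
for k in range(1, 5001):
    beta = 1.0 / k**0.6                  # fast (larger) step size
    alpha = 1.0 / k                      # slow step size; alpha/beta -> 0
    x = x + beta * (A11 @ x + A12 @ y + 0.1 * rng.standard_normal(d))
    y = y + alpha * (A21 @ x + A22 @ y + 0.1 * rng.standard_normal(d))
# Both iterates approach the fixed point (the origin here); the paper's
# bound says the slow iterate's MSE decays as trace(Sigma^y)/k + o(1/k).
```

Averaging the squared error of `y` over many independent runs would let one check the $\mathrm{trace}(\Sigma^y)/k$ leading term empirically.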
Problem

Research questions and friction points this paper is trying to address.

Analyzes error bounds for two-time-scale linear stochastic approximation with Markovian noise
Establishes tight finite-time convergence rates matching Central Limit Theorem covariance
Applies theory to reinforcement learning algorithms (TDC, GTD, GTD2) sample complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tight finite-time bounds for two-time-scale SA
MSE decreases as trace(Σ^y)/k + o(1/k)
Bounds match Central Limit Theorem covariance
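The special case highlighted in the abstract, linear SA with Polyak-Ruppert averaging, can be sketched the same way. The problem instance (`A`, `b`, noise level, step-size exponent) is an illustrative assumption, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear SA with Polyak-Ruppert iterate averaging (assumed instance).
d = 3
A = -np.eye(d)                 # Hurwitz drift; fixed point theta* = 0
b = np.zeros(d)
theta = np.ones(d)
avg = np.zeros(d)
for k in range(1, 20001):
    step = 1.0 / k**0.6        # step decays slower than 1/k, as averaging requires
    theta = theta + step * (A @ theta + b + 0.1 * rng.standard_normal(d))
    avg += (theta - avg) / k   # running Polyak-Ruppert average of the iterates
# The averaged iterate is the quantity whose MSE the paper bounds by the
# CLT covariance: trace(Sigma)/k plus lower-order terms.
```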
Shaan ul Haque
H. Milton Stewart School of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
S. Khodadadian
Grado Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, 24061, USA
Siva Theja Maguluri
Georgia Institute of Technology
Applied Probability · Optimization · Reinforcement Learning · Resource Allocation Algorithms · Networks