High-Order Error Bounds for Markovian LSA with Richardson-Romberg Extrapolation

📅 2025-08-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies the bias and higher-order error bounds of constant-step-size linear stochastic approximation (LSA) under Markovian noise. We show that Polyak–Ruppert (PR) averaging fails to eliminate the dominant $O(\alpha)$ bias term. To resolve this, we introduce an analytical framework based on a linearization decomposition of the bias and incorporate Richardson–Romberg (RR) extrapolation into constant-step LSA to systematically cancel the first-order term. Through a rigorous higher-order moment analysis, we derive tight moment bounds for the RR estimator and show that its leading error term matches the asymptotically optimal covariance matrix of the vanilla averaged LSA iterates, while its bias is reduced to $O(\alpha^2)$. This constitutes the first higher-order bias correction for LSA under non-i.i.d. (Markovian) noise, improving both convergence accuracy and statistical efficiency.
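The extrapolation step itself is simple to demonstrate. Below is a minimal sketch on a hypothetical toy instance (the two-state chain, the matrices, and all constants are our own illustrative choices, not taken from the paper): run constant-step LSA with PR averaging at step sizes $\alpha$ and $2\alpha$, then form the RR combination $2\bar\theta_\alpha - \bar\theta_{2\alpha}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance of LSA with Markovian noise (illustration only):
# observations (A_s, b_s) indexed by the state s of a 2-state Markov chain,
# target theta* solving  E_pi[A_s] theta = E_pi[b_s]  under the stationary law pi.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([2 / 3, 1 / 3])  # stationary distribution of P

A_s = np.array([[[1.2, 0.3], [0.0, 1.0]],
                [[0.6, 0.0], [0.1, 0.4]]])
b_s = np.array([[1.0, -0.5],
                [0.4,  0.2]])

A_bar = np.tensordot(pi, A_s, axes=1)  # E_pi[A_s]
b_bar = pi @ b_s                       # E_pi[b_s]
theta_star = np.linalg.solve(A_bar, b_bar)

def lsa_pr_average(alpha, n_iters=200_000, burn_in=50_000):
    """Constant-step LSA  theta_{k+1} = theta_k - alpha (A_{s_k} theta_k - b_{s_k})
    with Polyak-Ruppert averaging over the post-burn-in iterates."""
    theta = np.zeros(2)
    avg = np.zeros(2)
    s = 0
    for k in range(n_iters):
        s = rng.choice(2, p=P[s])  # advance the Markov chain
        theta = theta - alpha * (A_s[s] @ theta - b_s[s])
        if k >= burn_in:
            avg += theta
    return avg / (n_iters - burn_in)

alpha = 0.05
theta_a = lsa_pr_average(alpha)       # run at step size alpha
theta_2a = lsa_pr_average(2 * alpha)  # run at step size 2*alpha

# Richardson-Romberg extrapolation: the O(alpha) bias terms cancel,
# leaving an O(alpha^2) bias in the combined estimator.
theta_rr = 2 * theta_a - theta_2a

print("PR bias      :", np.linalg.norm(theta_a - theta_star))
print("PR + RR bias :", np.linalg.norm(theta_rr - theta_star))
```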

📝 Abstract
In this paper, we study the bias and high-order error bounds of the Linear Stochastic Approximation (LSA) algorithm with Polyak-Ruppert (PR) averaging under Markovian noise. We focus on the version of the algorithm with constant step size $\alpha$ and propose a novel decomposition of the bias via a linearization technique. We analyze the structure of the bias and show that the leading-order term is linear in $\alpha$ and cannot be eliminated by PR averaging. To address this, we apply the Richardson-Romberg (RR) extrapolation procedure, which effectively cancels the leading bias term. We derive high-order moment bounds for the RR iterates and show that the leading error term aligns with the asymptotically optimal covariance matrix of the vanilla averaged LSA iterates.
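The mechanism behind the RR correction can be summarized in a two-line expansion. The notation below ($\bar\theta_\alpha$ for the PR-averaged iterate at step size $\alpha$, $b$ for the leading bias vector) is ours and schematic, not the paper's exact statement:

```latex
\mathbb{E}[\bar{\theta}_\alpha] = \theta^\star + \alpha\, b + O(\alpha^2),
\qquad\text{hence}\qquad
2\,\mathbb{E}[\bar{\theta}_\alpha] - \mathbb{E}[\bar{\theta}_{2\alpha}]
  = 2\bigl(\theta^\star + \alpha b\bigr) - \bigl(\theta^\star + 2\alpha b\bigr) + O(\alpha^2)
  = \theta^\star + O(\alpha^2).
```

PR averaging suppresses the fluctuation part of the error but not this $\alpha$-linear term, which is why the extrapolation over two step sizes is needed.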
Problem

Research questions and friction points this paper is trying to address.

Analyze the bias of Linear Stochastic Approximation under Markovian noise
Cancel the leading bias term via Richardson-Romberg extrapolation
Derive high-order error bounds for the bias-corrected LSA estimator
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linearization technique for decomposing the bias
Richardson-Romberg extrapolation to cancel the leading bias term
High-order moment bounds for the RR iterates (empirical sanity check sketched below)
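A quick way to check the claimed $O(\alpha) \to O(\alpha^2)$ improvement numerically is to sweep the step size and read off the decay rate of the bias. The sketch below reuses the hypothetical `lsa_pr_average` helper and `theta_star` from the earlier snippet; since a single run's Monte Carlo error can mask the slopes, averaging over several independent runs may be needed in practice:

```python
import numpy as np

# Sweep step sizes and compare the bias decay of PR averaging vs. PR + RR.
# (Reuses theta_star and lsa_pr_average from the sketch above; all values
# come from our hypothetical toy instance, not from the paper.)
for alpha in (0.08, 0.04, 0.02):
    theta_a = lsa_pr_average(alpha)
    theta_2a = lsa_pr_average(2 * alpha)
    theta_rr = 2 * theta_a - theta_2a
    print(f"alpha={alpha:.2f}  "
          f"PR bias={np.linalg.norm(theta_a - theta_star):.2e}  "
          f"RR bias={np.linalg.norm(theta_rr - theta_star):.2e}")
# Expected trend: halving alpha roughly halves the PR bias (slope ~1 on a
# log-log plot), while the RR bias drops by roughly a factor of 4 (slope ~2).
```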