AI Summary
This work addresses the absence of non-asymptotic, explicit Gaussian approximation error bounds for the stationary distribution of constant-stepsize stochastic approximation algorithms. It establishes, for the first time, explicit non-asymptotic Wasserstein distance bounds between the (centered and scaled) stationary distribution and a Gaussian distribution, covering both i.i.d. and Markovian noise settings, and derives Berry–Esseen-type bounds for tail probabilities. Notably, the analysis reveals that in non-strongly convex stochastic gradient descent (SGD), the stationary distribution converges to a non-Gaussian Gibbs-type limiting law rather than a Gaussian one. The theoretical framework yields error bounds of order $O(\alpha^{1/2}\log(1/\alpha))$ for SGD, linear, and contractive nonlinear stochastic approximation schemes, and numerical experiments corroborate the validity of the Gibbs limiting distribution.
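To make the objects in the summary concrete, here is a minimal sketch of the setup; the recursion and the symbols $g$, $\xi_{k+1}$, $x^*$, $\mu_\alpha$, and $\Sigma$ are illustrative notation rather than the paper's:

$$x_{k+1} = x_k - \alpha\bigl(g(x_k) + \xi_{k+1}\bigr), \qquad x_\infty \sim \mu_\alpha \ \ \text{(stationary distribution at fixed stepsize } \alpha\text{)},$$

$$\mathcal{W}\Bigl(\mathrm{Law}\bigl(\alpha^{-1/2}(x_\infty - x^*)\bigr),\ \mathcal{N}(0,\Sigma)\Bigr) \;\le\; C\,\alpha^{1/2}\log(1/\alpha),$$

where $x^*$ is the target point (e.g., the minimizer for SGD) and $\Sigma$ is the covariance of the Gaussian limit, which in this kind of setting typically solves a Lyapunov equation determined by the drift Jacobian at $x^*$ and the noise covariance.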
Abstract
Constant-stepsize stochastic approximation (SA) is widely used in learning for computational efficiency. For a fixed stepsize, the iterates typically admit a stationary distribution that is rarely tractable. Prior work shows that as the stepsize $\alpha \downarrow 0$, the centered-and-scaled steady state converges weakly to a Gaussian random vector. However, for fixed $\alpha$, this weak convergence offers no usable error bound for approximating the steady state by its Gaussian limit. This paper provides explicit, non-asymptotic error bounds for fixed $\alpha$. We first prove general-purpose theorems that bound the Wasserstein distance between the centered-and-scaled steady state and an appropriate Gaussian distribution, under regularity conditions on the drift and moment conditions on the noise. To ensure broad applicability, we cover both i.i.d. and Markovian noise models. We then instantiate these theorems for three representative SA settings: (1) stochastic gradient descent (SGD) for smooth strongly convex objectives, (2) linear SA, and (3) contractive nonlinear SA. We obtain dimension- and stepsize-dependent, explicit bounds in Wasserstein distance of order $\alpha^{1/2}\log(1/\alpha)$ for small $\alpha$. Building on the Wasserstein approximation error, we further derive non-uniform Berry--Esseen-type tail bounds that compare the steady-state tail probability to Gaussian tails, with an explicit error term that decays in both the deviation level and the stepsize $\alpha$. We adapt the same analysis to SGD beyond strong convexity and study general convex objectives. We identify a non-Gaussian (Gibbs) limiting law under the correct scaling, which is validated numerically, and provide a corresponding pre-limit Wasserstein error bound.
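As a quick numerical illustration of the Gaussian-approximation phenomenon described above (a minimal sketch, not the paper's experiments), the following Python snippet simulates constant-stepsize SGD on the scalar strongly convex quadratic $f(x) = \tfrac{a}{2}x^2$ with additive i.i.d. Gaussian gradient noise and compares the variance of the centered-and-scaled stationary iterates with the variance $\sigma^2/(2a)$ predicted by the Gaussian limit. All constants (curvature $a$, noise level $\sigma$, stepsize, burn-in length) are illustrative choices.

```python
# Minimal sketch: constant-stepsize SGD on f(x) = (a/2) x^2 with i.i.d. noise.
# For small alpha, the centered-and-scaled stationary iterate alpha^{-1/2} * x_infty
# is approximately N(0, sigma^2 / (2a)); we compare variances empirically.
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 2.0, 1.0            # curvature and noise level (assumed values)
alpha = 0.01                   # constant stepsize
n_burn, n_keep = 100_000, 500_000

x = 0.0
samples = np.empty(n_keep)
for k in range(n_burn + n_keep):
    grad_noise = sigma * rng.standard_normal()
    x -= alpha * (a * x + grad_noise)    # SGD step; the minimizer is x* = 0
    if k >= n_burn:
        samples[k - n_burn] = x          # (approximately) stationary iterates

scaled = samples / np.sqrt(alpha)        # center at x* = 0, scale by alpha^{-1/2}
print("empirical variance of scaled iterates:", scaled.var())
print("Gaussian-limit variance sigma^2/(2a): ", sigma**2 / (2 * a))
```

Shrinking the stepsize brings the empirical variance closer to the limit variance, consistent with a Gaussian approximation error that vanishes as $\alpha \downarrow 0$.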