Quantitative Error Bounds for Scaling Limits of Stochastic Iterative Algorithms

📅 2025-01-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the non-asymptotic pathwise approximation accuracy of stochastic iterative algorithms—such as SGD and SGLD—to the Ornstein–Uhlenbeck process in the univariate setting. Addressing the lack of quantifiable, path-level error bounds in existing theory, we introduce a novel analytical framework for path space by integrating infinite-dimensional Stein’s method with exchangeable pair techniques. This yields explicit convergence rates under both the Lévy–Prokhorov metric and the bounded Wasserstein distance, delivering tight, non-asymptotic upper bounds on pathwise approximation error. We rigorously establish weak convergence and provide quantitative error control for both the iterates’ mean and variance. The framework thus furnishes a foundational toolset for extending the analysis to multivariate settings and more complex stochastic optimization algorithms.

📝 Abstract
Stochastic iterative algorithms, including stochastic gradient descent (SGD) and stochastic gradient Langevin dynamics (SGLD), are widely utilized for optimization and sampling in large-scale and high-dimensional problems in machine learning, statistics, and engineering. Numerous works have bounded the parameter error in, and characterized the uncertainty of, these approximations. One common approach has been to use scaling limit analyses to relate the distribution of algorithm sample paths to a continuous-time stochastic process approximation, particularly in asymptotic setups. Focusing on the univariate setting, in this paper, we build on previous work to derive non-asymptotic functional approximation error bounds between the algorithm sample paths and the Ornstein-Uhlenbeck approximation using an infinite-dimensional version of Stein's method of exchangeable pairs. We show that this bound implies weak convergence under modest additional assumptions and leads to a bound on the error of the variance of the iterate averages of the algorithm. Furthermore, we use our main result to construct error bounds in terms of two common metrics: the Lévy–Prokhorov and bounded Wasserstein distances. Our results provide a foundation for developing similar error bounds for the multivariate setting and for more sophisticated stochastic approximation algorithms.
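The Ornstein–Uhlenbeck approximation discussed in the abstract can be illustrated numerically. The sketch below (not from the paper; the quadratic objective, additive Gaussian gradient noise, and all parameter values are illustrative assumptions) runs univariate SGD and compares the empirical stationary variance of the iterates against the stationary variance of the limiting OU process:

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 0.01        # step size (assumed small, matching the scaling-limit regime)
sigma = 1.0         # gradient-noise scale (illustrative)
n_steps = 200_000

# SGD on the quadratic f(theta) = theta^2 / 2 with additive Gaussian
# gradient noise: theta_{k+1} = theta_k - gamma * (theta_k + sigma * xi_k).
theta = 0.0
path = np.empty(n_steps)
for k in range(n_steps):
    noisy_grad = theta + sigma * rng.standard_normal()
    theta -= gamma * noisy_grad
    path[k] = theta

# In rescaled time t = k * gamma, the iterates track the OU process
# dX_t = -X_t dt + sqrt(gamma) * sigma dW_t, whose stationary variance
# is gamma * sigma^2 / 2. Compare against the empirical variance after burn-in.
empirical_var = path[n_steps // 2:].var()
ou_var = gamma * sigma**2 / 2
print(f"empirical variance: {empirical_var:.5f}, OU prediction: {ou_var:.5f}")
```

The two variances agree to within a few percent for small step sizes; the paper's contribution is to make this kind of agreement quantitative at the level of whole sample paths, not just stationary moments.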
Problem

Research questions and friction points this paper is trying to address.

Randomized Iterative Algorithms
Accuracy and Uncertainty
Large and Complex Problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mathematical Technique
Error Quantification
Ideal Model Approximation
Xiaoyu Wang
Department of Mathematics & Statistics, Boston University
Mikolaj J. Kasprzak
Department of Information Systems, Data Analytics & Operations, ESSEC Business School
Jeffrey Negrea
University of Waterloo
Statistics · Online learning · Machine Learning · Applied Probability
Solesne Bourguin
Department of Mathematics & Statistics, Boston University
Jonathan H. Huggins
Department of Mathematics & Statistics, Boston University; Faculty of Computing & Data Sciences, Boston University