Can SGD Handle Heavy-Tailed Noise?

📅 2025-08-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the convergence of standard stochastic gradient descent (SGD) under heavy-tailed gradient noise—where gradients possess finite $p$-th moments for $1 < p leq 2$, but potentially unbounded variance. Addressing the limitation of existing analyses that assume bounded second moments, we develop a unified convergence framework applicable to convex, strongly convex, and nonconvex optimization. Our approach integrates projected SGD with mini-batch sampling, assumes Hölder-continuous gradients, and employs polynomially decaying step sizes. We establish tight convergence rates: $mathcal{O}(varepsilon^{-p/(p-1)})$ for convex objectives, $mathcal{O}(varepsilon^{-p/(2(p-1))})$ for strongly convex ones, and $mathcal{O}(varepsilon^{-2p/(p-1)})$ for nonconvex settings—matching the information-theoretic lower bound in the nonconvex case. These results demonstrate that SGD retains robustness and achieves optimal statistical efficiency even under heavy-tailed noise with infinite variance.

Technology Category

Application Category

📝 Abstract
Stochastic Gradient Descent (SGD) is a cornerstone of large-scale optimization, yet its theoretical behavior under heavy-tailed noise -- common in modern machine learning and reinforcement learning -- remains poorly understood. In this work, we rigorously investigate whether vanilla SGD, devoid of any adaptive modifications, can provably succeed under such adverse stochastic conditions. Assuming only that stochastic gradients have bounded $p$-th moments for some $p in (1, 2]$, we establish sharp convergence guarantees for (projected) SGD across convex, strongly convex, and non-convex problem classes. In particular, we show that SGD achieves minimax optimal sample complexity under minimal assumptions in the convex and strongly convex regimes: $mathcal{O}(varepsilon^{-frac{p}{p-1}})$ and $mathcal{O}(varepsilon^{-frac{p}{2(p-1)}})$, respectively. For non-convex objectives under Hölder smoothness, we prove convergence to a stationary point with rate $mathcal{O}(varepsilon^{-frac{2p}{p-1}})$, and complement this with a matching lower bound specific to SGD with arbitrary polynomial step-size schedules. Finally, we consider non-convex Mini-batch SGD under standard smoothness and bounded central moment assumptions, and show that it also achieves a comparable $mathcal{O}(varepsilon^{-frac{2p}{p-1}})$ sample complexity with a potential improvement in the smoothness constant. These results challenge the prevailing view that heavy-tailed noise renders SGD ineffective, and establish vanilla SGD as a robust and theoretically principled baseline -- even in regimes where the variance is unbounded.
Problem

Research questions and friction points this paper is trying to address.

Can vanilla SGD handle heavy-tailed noise effectively?
Establishes convergence guarantees for SGD under bounded p-th moments.
Challenges the view that heavy-tailed noise makes SGD ineffective.
Innovation

Methods, ideas, or system contributions that make the work stand out.

SGD with bounded p-th moments analysis
Minimax optimal sample complexity proofs
Convergence guarantees for non-convex objectives
🔎 Similar Papers
No similar papers found.