🤖 AI Summary
This work addresses a long-standing limitation in SGD convergence analysis: the reliance on hard-to-verify assumptions on the variance of the stochastic gradients. Contributing to a recent line of work, it establishes rigorous convergence guarantees *without any assumption on gradient variance*, leveraging only the strong convexity and smoothness of the objective function. Methodologically, it derives its bounds from the monotonicity of a simple Lyapunov energy and pairs this with a Performance Estimation Problem (PEP) analysis to assess their tightness. Key contributions are: (1) a notably expanded admissible step-size range compared with conventional variance-dependent bounds; (2) empirical evidence, via PEP, that the bias term in the bounds is tight within the proposed framework; and (3) theoretical guarantees that remain valid and verifiable under realistic, non-ideal noise conditions, improving SGD's interpretability and practical applicability.
📝 Abstract
The analysis of Stochastic Gradient Descent (SGD) often relies on assumptions on the variance of the stochastic gradients, which are usually not satisfied, or are difficult to verify, in practice. This paper contributes to a recent line of work that attempts to provide guarantees without any variance assumption, leveraging only the (strong) convexity and smoothness of the loss functions. In this context, we prove new theoretical bounds derived from the monotonicity of a simple Lyapunov energy, improving on the current state of the art and extending the bounds' validity to larger step-sizes. Our theoretical analysis is backed by a Performance Estimation Problem (PEP) analysis, which allows us to claim that, empirically, the bias term in our bounds is tight within our framework.
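The behavior the abstract describes can be illustrated with a minimal sketch (not the paper's experiments; all parameter values below are hypothetical): constant-step-size SGD on a strongly convex, smooth quadratic with additive gradient noise contracts geometrically toward the optimum, up to a noise-induced bias term, and the step-size trades off convergence speed against the size of that bias.

```python
import numpy as np

def sgd_quadratic(mu=1.0, step=0.1, noise_std=0.1, iters=2000, seed=0):
    """Run SGD on f(x) = 0.5 * mu * x^2 with additive Gaussian gradient noise.

    Returns the final distance to the minimizer x* = 0. Illustrative only:
    the strongly convex quadratic and noise model are assumptions made for
    this sketch, not the paper's setting.
    """
    rng = np.random.default_rng(seed)
    x = np.array([5.0])  # start away from the optimum
    for _ in range(iters):
        grad = mu * x + noise_std * rng.standard_normal(x.shape)  # noisy gradient
        x = x - step * grad  # SGD update
    return float(np.abs(x[0]))

# A smaller step-size yields a smaller noise-induced bias but a slower
# linear rate; a larger step-size converges faster to a wider neighborhood.
dist_small = sgd_quadratic(step=0.1)
dist_large = sgd_quadratic(step=0.9)
```

Both runs end near the optimum; the residual distance is the bias term that variance-free analyses such as this paper's aim to bound tightly.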