AI Summary
This paper addresses the computational bottleneck of variance-reduced stochastic optimization algorithms (e.g., SVRG, SARAH) in large-scale machine learning: their reliance on expensive full-gradient evaluations. We propose a novel, full-gradient-free variance reduction method that integrates random reshuffling with the gradient caching mechanism of SAG/SAGA, augmented by recursive gradient updates and a new analytical framework for variance control without full gradients. Theoretically, our method achieves the same convergence rate as classical reshuffling in non-convex settings and, for the first time among full-gradient-free methods, attains a superior rate in strongly convex settings. Empirically, it accelerates training by 30-50% on large-scale datasets while reducing memory overhead by 90%. Our key contribution is the first full-gradient-free algorithm achieving SVRG/SARAH-level variance reduction, thereby eliminating the need for periodic full-gradient computations entirely.
Abstract
In today's world, machine learning is hard to imagine without large training datasets and models. This has led to the use of stochastic methods for training, such as stochastic gradient descent (SGD). SGD provides weak theoretical guarantees of convergence, but there are modifications, such as Stochastic Variance Reduced Gradient (SVRG) and StochAstic Recursive grAdient algoritHm (SARAH), that can reduce the variance. These methods require the computation of the full gradient occasionally, which can be time-consuming. In this paper, we explore variants of variance reduction algorithms that eliminate the need for full gradient computations. To make our approach memory-efficient and avoid full gradient computations, we use two key techniques: the shuffling heuristic and the gradient-caching idea of the SAG/SAGA methods. As a result, we improve existing convergence estimates for variance reduction algorithms that avoid full gradient computations. Additionally, for a non-convex objective function, our estimate matches that of classic shuffling methods, while for a strongly convex one, it is an improvement. We conduct a comprehensive theoretical analysis and provide extensive experimental results to validate the efficiency and practicality of our methods for large-scale machine learning problems.
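To make the combination of the two key techniques concrete, the following is a minimal sketch (not the authors' exact algorithm) of the general pattern the abstract describes: SGD with random reshuffling, where a SAGA-style per-sample gradient table replaces the periodic full-gradient pass of SVRG/SARAH. The function name `reshuffled_saga` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def reshuffled_saga(grad_i, n, x0, lr=0.1, epochs=100, rng=None):
    """Illustrative sketch: random reshuffling + SAGA-style gradient caching.

    grad_i(x, i) returns the gradient of the i-th component function at x.
    No full-gradient pass is ever computed: the table average stands in
    for the SVRG/SARAH anchor gradient.
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)          # copy so the caller's x0 is untouched
    table = np.zeros((n,) + x.shape)       # cached per-sample gradients (memory cost: n copies)
    table_avg = table.mean(axis=0)         # running average of the cache
    for _ in range(epochs):
        for i in rng.permutation(n):       # reshuffling: each sample exactly once per epoch
            g = grad_i(x, i)
            # SAGA-style variance-reduced estimator:
            # fresh gradient minus stale cached gradient, plus the cache average
            v = g - table[i] + table_avg
            table_avg = table_avg + (g - table[i]) / n  # incremental average update
            table[i] = g                                # refresh the cache entry
            x -= lr * v
    return x
```

As a usage example, minimizing the strongly convex objective f(x) = (1/2) * sum_i (x - a_i)^2 with `grad_i = lambda x, i: x - a[i]` drives `x` to the mean of `a`; at the optimum every cached entry matches the current per-sample gradient, so the estimator's variance vanishes, which is the mechanism behind the linear rate in the strongly convex case.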