🤖 AI Summary
This work addresses the lack of high-probability convergence guarantees for randomized subspace methods in non-convex optimization, particularly under heavy-tailed noise. The paper first establishes a high-probability convergence bound for randomized subspace SGD (RS-SGD) under sub-Gaussian noise, matching the oracle complexity of prior in-expectation results. Motivated by the prevalence of heavy-tailed gradients in modern machine learning, it then proposes randomized subspace normalized SGD (RS-NSGD), which integrates direction normalization into subspace updates and achieves both in-expectation and high-probability convergence under the assumption that the noise has bounded p-th moments. Notably, RS-NSGD can attain better oracle complexity than full-dimensional normalized SGD in heavy-tailed settings, highlighting its theoretical and practical advantages.
📝 Abstract
Randomized subspace methods reduce per-iteration cost; however, in nonconvex optimization, most analyses are expectation-based, and high-probability bounds remain scarce even under sub-Gaussian noise. We first prove that randomized subspace SGD (RS-SGD) admits a high-probability convergence bound under sub-Gaussian noise, achieving the same order of oracle complexity as prior in-expectation results. Motivated by the prevalence of heavy-tailed gradients in modern machine learning, we then propose randomized subspace normalized SGD (RS-NSGD), which integrates direction normalization into subspace updates. Assuming the noise has bounded $p$-th moments, we establish both in-expectation and high-probability convergence guarantees, and show that RS-NSGD can achieve better oracle complexity than full-dimensional normalized SGD.
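To make the update concrete, here is a minimal sketch of one plausible RS-NSGD iteration in Python: each step draws a fresh Gaussian sketch matrix, projects the stochastic gradient into the subspace, normalizes the projected direction, and maps the step back to the full space. The sketch distribution, step-size handling, and names (`grad_fn`, `subspace_dim`) are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rs_nsgd(grad_fn, x0, lr=1e-2, subspace_dim=10, n_iters=1000, seed=0):
    """Illustrative sketch of randomized subspace normalized SGD (RS-NSGD).

    grad_fn(x) returns a stochastic gradient estimate at x (possibly with
    heavy-tailed noise). Each iteration samples a random subspace, projects
    the stochastic gradient into it, normalizes the projected direction,
    and takes a fixed-size step back in the full space.
    """
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    d = x.size
    for _ in range(n_iters):
        g = grad_fn(x)                         # stochastic gradient estimate
        # Gaussian sketch matrix (assumed; other distributions are possible)
        P = rng.standard_normal((d, subspace_dim)) / np.sqrt(subspace_dim)
        u = P.T @ g                            # gradient projected into the subspace
        norm = np.linalg.norm(u)
        if norm > 0.0:
            # Normalized step in the subspace, mapped back to the full space;
            # normalization bounds the step size even when |g| is very large.
            x -= lr * (P @ (u / norm))
    return x
```

Normalizing the projected direction is what caps the influence of any single heavy-tailed gradient sample, which is the mechanism behind the bounded $p$-th moment guarantees described above.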