🤖 AI Summary
This work addresses a key limitation of stochastic proximal algorithms for composite convex optimization: existing last-iterate analyses typically rely on a bounded variance assumption. Focusing on problems with a smooth loss plus a nonsmooth regularizer, the study establishes last-iterate convergence guarantees for both the stochastic proximal gradient method and the stochastic incremental proximal method. Assuming only that the component functions are convex and smooth, without requiring bounded variance, the authors prove for the first time that both algorithms achieve a near-optimal $\widetilde{O}(1/\sqrt{T})$ convergence rate in the last iterate. The result applies directly to practical settings such as graph-guided regularization and extends naturally to structured optimization problems arising in multi-task and federated learning.
📝 Abstract
We analyze two classical algorithms for solving additively composite convex optimization problems, where the objective is the sum of a smooth term and a nonsmooth regularizer: the proximal stochastic gradient method, for a single regularizer; and the randomized incremental proximal method, which applies the proximal operator of a randomly selected component when the regularizer is given as the sum of many nonsmooth functions. We focus on relaxing the bounded variance assumption that is common, yet stringent, for obtaining last-iterate convergence rates. We prove an $\widetilde{O}(1/\sqrt{T})$ rate of convergence for the last iterate of both algorithms under componentwise convexity and smoothness, which is optimal up to logarithmic factors. Our results apply directly to graph-guided regularizers that arise in multi-task and federated learning, where the regularizer decomposes as a sum over the edges of a collaboration graph.
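To make the setting concrete, here is a minimal sketch of the first algorithm, the proximal stochastic gradient method, on an illustrative instance: a least-squares loss with an $\ell_1$ regularizer, whose proximal operator is soft-thresholding. The problem instance, step-size schedule, and all function names are this sketch's own choices, not details taken from the paper; the paper's analysis concerns the convergence rate of the returned last iterate, not a specific implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_sgd(A, b, lam, T=2000, seed=0):
    """Proximal stochastic gradient for
        min_x  (1/2n) ||A x - b||^2 + lam * ||x||_1.

    Each iteration samples one data row, takes a stochastic gradient
    step on the smooth loss, then applies the prox of the nonsmooth
    regularizer. The O(1/sqrt(t)) step size is an illustrative choice.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = 1.0 / np.sqrt(t)           # decaying step size
        g = (A[i] @ x - b[i]) * A[i]     # stochastic gradient of smooth part
        x = soft_threshold(x - eta * g, eta * lam)
    return x  # last iterate: the quantity whose rate is analyzed

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = np.zeros(10)
    x_true[:3] = [1.0, -2.0, 0.5]        # sparse ground truth
    b = A @ x_true
    x_hat = prox_sgd(A, b, lam=0.01, T=5000, seed=2)
    print(np.round(x_hat, 2))
```

In the incremental proximal variant studied alongside it, the nonsmooth part is itself a sum (e.g., one term per edge of a collaboration graph), and each step applies the proximal operator of a single randomly selected term instead of the full regularizer's prox.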