🤖 AI Summary
This paper addresses the efficient solution of finite-sum co-coercive equations $Gx = 0$, where $G = \sum_{i=1}^n G_i$. We propose the first single-loop, variance-reduced accelerated Krasnosel'skii–Mann algorithm, compatible with unbiased stochastic estimators such as SVRG and SAGA, and embedded within a generalized fixed-point iteration framework. Theoretically, we establish, *for the first time in root-finding*, $O(1/k^2)$ and $o(1/k^2)$ convergence rates for the *last iterate*, along with almost-sure convergence; under strong quasi-monotonicity, we obtain linear convergence; and the oracle complexity is $O(n + n^{2/3}\varepsilon^{-1})$. Experiments demonstrate substantial improvements over existing stochastic and deterministic methods.
📝 Abstract
We propose a new class of fast Krasnosel'skii--Mann methods with variance reduction to solve a finite-sum co-coercive equation $Gx = 0$. Our algorithm is single-loop and leverages a new family of unbiased variance-reduced estimators specifically designed for a wider class of root-finding algorithms. Our method achieves both $\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates in terms of $\mathbb{E}[\| Gx^k\|^2]$, where $k$ is the iteration counter and $\mathbb{E}[\cdot]$ is the total expectation. We also establish almost sure $o(1/k^2)$ convergence rates and the almost sure convergence of the iterates $\{x^k\}$ to a solution of $Gx=0$. We instantiate our framework for two prominent estimators: SVRG and SAGA. By an appropriate choice of parameters, both variants attain an oracle complexity of $\mathcal{O}(n + n^{2/3}\epsilon^{-1})$ to reach an $\epsilon$-solution, where $n$ represents the number of summands in the finite-sum operator $G$. Furthermore, under $\sigma$-strong quasi-monotonicity, our method achieves a linear convergence rate and an oracle complexity of $\mathcal{O}(n + \max\{n, n^{2/3}\kappa\} \log(\frac{1}{\epsilon}))$, where $\kappa := L/\sigma$. We extend our approach to solve a class of finite-sum inclusions (possibly nonmonotone), demonstrating that our schemes retain the same theoretical guarantees as in the equation setting. Finally, numerical experiments validate our algorithms and demonstrate their promising performance compared to state-of-the-art methods.
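To make the setting concrete, the following is a minimal sketch of a single-loop root-finding iteration with a loopless SVRG-style unbiased estimator, applied to a synthetic finite-sum co-coercive operator (each $G_i$ is the gradient of a convex quadratic). The step size `eta`, the snapshot probability `p`, and the plain (non-accelerated) update are illustrative choices for demonstration only, not the accelerated scheme or parameter schedule analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5

# Synthetic components: G_i(x) = A_i x - b_i with A_i symmetric PSD,
# so each G_i is the gradient of a convex quadratic and hence co-coercive.
x_star = rng.normal(size=d)
As, bs = [], []
for _ in range(n):
    M = rng.normal(size=(d, d))
    A = M.T @ M / d + np.eye(d)   # PSD with eigenvalues bounded away from 0
    As.append(A)
    bs.append(A @ x_star)          # chosen so that G(x_star) = 0

def G_i(i, x):
    return As[i] @ x - bs[i]

def G(x):
    # Averaged finite sum (a scaled version of G = sum_i G_i)
    return sum(G_i(i, x) for i in range(n)) / n

# Single-loop iteration with an SVRG-style unbiased estimator:
#   g^k = G_{i_k}(x^k) - G_{i_k}(w^k) + G(w^k),  E[g^k] = G(x^k).
x = np.zeros(d)
w = x.copy()     # snapshot point
Gw = G(w)        # full operator evaluated at the snapshot
eta, p = 0.1, 1.0 / n

for k in range(2000):
    i = rng.integers(n)
    g = G_i(i, x) - G_i(i, w) + Gw   # unbiased estimate of G(x)
    x = x - eta * g
    if rng.random() < p:             # occasionally refresh the snapshot
        w = x.copy()
        Gw = G(w)

print(np.linalg.norm(G(x)))          # residual ||G(x)|| after 2000 steps
```

The snapshot refresh with probability $p = 1/n$ keeps the method single-loop (no inner epochs) while amortizing the cost of full evaluations of $G$, which is what yields the $n^{2/3}$-type terms in the oracle complexities quoted above.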