Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems

📅 2023-01-08
🏛️ Mathematics of Operations Research
📈 Citations: 8
Influential: 1
🤖 AI Summary
This work addresses large-scale nonlinear equation (root-finding) problems. We propose two stochastic block-coordinate optimistic gradient algorithms, a non-accelerated and an accelerated variant, guaranteeing convergence under weak Minty solution and co-coercivity assumptions, respectively. Our contributions are threefold: (i) the first integration of the optimistic gradient mechanism with stochastic block-coordinate updates; (ii) the first accelerated block-coordinate root-finding algorithm with almost-sure convergence guarantees; and (iii) an extension to finite-sum inclusion problems, yielding new federated learning-type solvers. Theoretically, we establish a best-iterate rate of $O(1/k)$ on $\mathbb{E}[\|Gx^k\|^2]$ for the non-accelerated variant and last-iterate rates of $O(1/k^2)$ for the accelerated one, along with almost-sure convergence of the iterates and an almost-sure $o(1/k^2)$ rate on $\|Gx^k\|^2$. Experiments on synthetic and real-world datasets show that our methods compare favorably with related state-of-the-art algorithms.
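The core update the summary refers to can be illustrated in a few lines. Below is a minimal sketch of one randomized block-coordinate optimistic gradient step, assuming the classical optimistic (past-extragradient) direction $2Gx^k - Gx^{k-1}$ applied to a single uniformly sampled block; the function name `block_og_step`, the step size `eta`, the block partition, and the quadratic test problem are illustrative placeholders, not the paper's exact scheme or constants.

```python
import numpy as np

def block_og_step(G, x_prev, x_curr, blocks, eta, rng):
    """One randomized block-coordinate optimistic gradient step (sketch).

    Picks a block i_k uniformly at random and updates only its coordinates:
        x^{k+1}[i_k] = x^k[i_k] - eta * (2*G(x^k)[i_k] - G(x^{k-1})[i_k]).
    All other coordinates are carried over unchanged.
    """
    idx = blocks[rng.integers(len(blocks))]  # coordinates of the sampled block i_k
    x_next = x_curr.copy()
    # Optimistic (past-extragradient) direction restricted to the chosen block.
    # A practical implementation would evaluate only block i_k of G.
    x_next[idx] -= eta * (2.0 * G(x_curr)[idx] - G(x_prev)[idx])
    return x_next

# Tiny usage example on G(x) = A @ x - b with A symmetric positive definite,
# so G is co-coercive and the residual ||G x^k|| should shrink toward zero.
rng = np.random.default_rng(0)
n = 60
M = rng.standard_normal((n, n))
A = M @ M.T / n + 0.5 * np.eye(n)
b = rng.standard_normal(n)
G = lambda x: A @ x - b
blocks = np.array_split(np.arange(n), 4)   # four coordinate blocks
x_prev = x_curr = np.zeros(n)
for _ in range(20000):
    x_prev, x_curr = x_curr, block_og_step(G, x_prev, x_curr, blocks, 0.05, rng)
print(np.linalg.norm(G(x_curr)))           # residual at the last iterate
```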
📝 Abstract
In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of nonlinear equations in large-scale settings, which are known as root-finding problems. Our first algorithm is nonaccelerated with constant step sizes and achieves an $O(1/k)$ best-iterate convergence rate on $\mathbb{E}[\|Gx^k\|^2]$ when the underlying operator $G$ is Lipschitz continuous and possesses a weak Minty solution, in which $\mathbb{E}[\cdot]$ is the expectation and $k$ is the iteration counter. Our second method is a new accelerated randomized block-coordinate optimistic gradient algorithm. We establish both $O(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both $\mathbb{E}[\|Gx^k\|^2]$ and $\mathbb{E}[\|x^{k+1} - x^k\|^2]$ for this algorithm under the co-coercivity of $G$. In addition, we prove that the iterate sequence $\{x^k\}$ converges to a solution almost surely and $\|Gx^k\|^2$ attains a $o(1/k^2)$ almost sure convergence rate. Then, we apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization, especially in federated learning. We obtain two new federated learning-type algorithms and their convergence rate guarantees for solving this problem class. We test our algorithms on four numerical examples using both synthetic and real data and compare them with related methods. Our numerical experiments show promising performance of the proposed methods against their competitors. Funding: This work was supported by the National Science Foundation (NSF) [Grant NSF-RTG DMS-2134107] and the Office of Naval Research [Grants N00014-20-1-2088, N00014-23-1-2588].
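The finite-sum extension mentioned in the abstract targets inclusions built from an average of local operators, roughly $0 \in \frac{1}{n}\sum_{i=1}^n G_i(x) + T(x)$. Below is a deliberately simplified sketch of how one optimistic gradient round could look in a federated setting with full client participation and without a resolvent step for $T$; `client_ops`, `eta`, and the synchronous averaging are assumptions for illustration only, not the paper's actual federated algorithms.

```python
import numpy as np

def federated_og_round(client_ops, x_prev, x_curr, eta):
    """One synchronous server round for the finite-sum root-finding problem
    (1/n) * sum_i G_i(x) = 0 -- an illustrative sketch, not the paper's method.

    Each client i evaluates its local operator G_i at the current and previous
    server iterates; the server averages the replies and takes one optimistic
    gradient step with the past-extragradient direction 2*G(x^k) - G(x^{k-1}).
    """
    n = len(client_ops)
    g_curr = sum(Gi(x_curr) for Gi in client_ops) / n   # aggregated G(x^k)
    g_prev = sum(Gi(x_prev) for Gi in client_ops) / n   # aggregated G(x^{k-1})
    return x_curr - eta * (2.0 * g_curr - g_prev)
```

In practice one would subsample clients each round rather than aggregate all of them, which is where the randomized block-coordinate structure would enter.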
Problem

Research questions and friction points this paper is trying to address.

Develop randomized block-coordinate algorithms for large-scale root-finding problems
Achieve accelerated convergence rates for nonlinear equations under co-coercivity assumptions
Apply methods to federated learning and large-scale optimization applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Randomized block-coordinate optimistic gradient algorithms
Non-accelerated with constant stepsizes
Accelerated with improved $O(1/k^2)$ convergence rates (for intuition, see the anchoring sketch after this list)
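For intuition on how an $O(1/k^2)$ rate on the squared residual can arise under co-coercivity, here is a deterministic Halpern-anchored iteration, a standard acceleration template for co-coercive operators. It is a simplified stand-in, not the paper's accelerated randomized block-coordinate scheme; `halpern_anchored` and `eta` are illustrative names.

```python
import numpy as np

def halpern_anchored(G, x0, eta, iters):
    """Deterministic Halpern-anchored iteration (illustrative stand-in).

    If G is co-coercive, T = I - eta*G is nonexpansive for a suitable eta, and
        x^{k+1} = beta_k * x^0 + (1 - beta_k) * (x^k - eta * G(x^k)),
    with anchoring weight beta_k = 1/(k+2), is known to drive the squared
    residual ||G x^k||^2 down at an O(1/k^2) rate.
    """
    x = x0.copy()
    for k in range(iters):
        beta = 1.0 / (k + 2)                 # vanishing pull toward the anchor x^0
        x = beta * x0 + (1.0 - beta) * (x - eta * G(x))
    return x
```

The anchoring term plays a role analogous to momentum in Nesterov-type schemes: a small, vanishing bias toward the starting point $x^0$ is traded for a faster decay of the residual.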
Quoc Tran-Dinh
Department of Statistics and Operations Research, UNC
convex optimization · nonlinear programming · optimization for machine learning
Yang Luo
Department of Statistics and Operations Research, The University of North Carolina at Chapel Hill (UNC), 318 Hanes Hall, Chapel Hill, NC 27599-3260, USA.