🤖 AI Summary
To address the efficiency bottlenecks of distributed learning over undirected networks, namely high communication overhead and large-scale data, this paper proposes a novel distributed ADMM framework that combines multiple local update steps with stochastic gradients, and further introduces a variance-reduction mechanism inspired by SAGA and SPIDER. The analysis treats convex and nonconvex problems in a unified way: the base method converges to a neighborhood of the global optimum (convex) or of a stationary point (nonconvex), the variance-reduced variant achieves exact convergence, and local training is rigorously shown to accelerate the convergence rate. Experiments demonstrate substantial reductions in communication rounds and marked improvements in convergence speed over baseline methods, achieving a favorable trade-off between efficiency and accuracy.
📝 Abstract
We address distributed learning problems, both nonconvex and convex, over undirected networks. In particular, we design a novel algorithm based on the distributed Alternating Direction Method of Multipliers (ADMM) to address the challenges of high communication costs and large datasets. Our design tackles these challenges i) by enabling the agents to perform multiple local training steps between communication rounds; and ii) by allowing the agents to employ stochastic gradients during local computations. We show that the proposed algorithm converges to a neighborhood of a stationary point for nonconvex problems, and of an optimal point for convex problems. We also propose a variant of the algorithm that incorporates variance reduction, thus achieving exact convergence. We show that the resulting algorithm indeed converges to a stationary (or optimal) point, and moreover that local training accelerates convergence. We thoroughly compare the proposed algorithms with the state of the art, both theoretically and through numerical results.
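To make the ingredients concrete, here is a minimal Python sketch of the *kind* of scheme the abstract describes, not the paper's exact algorithm: a consensus-style ADMM loop on a toy least-squares problem over a ring network, where each agent runs several minibatch stochastic-gradient steps on its augmented Lagrangian between communication rounds. All problem sizes, step sizes, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: agent i holds data (A_i, b_i); the network jointly minimizes
# sum_i ||A_i x - b_i||^2 / (2 m) subject to consensus on x.
n_agents, dim, m = 4, 3, 20
A = [rng.normal(size=(m, dim)) for _ in range(n_agents)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true for Ai in A]  # consistent data, so the optimum is x_true

# Undirected ring topology: each agent's neighbor list.
nbrs = [[(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)]

rho, lr, local_steps, rounds, batch = 1.0, 0.05, 10, 60, 5

x = [np.zeros(dim) for _ in range(n_agents)]    # primal iterates
lam = [np.zeros(dim) for _ in range(n_agents)]  # dual variables

for _ in range(rounds):
    # Communication round: each agent averages with its neighbors' iterates.
    z = [(x[i] + sum(x[j] for j in nbrs[i])) / (1 + len(nbrs[i]))
         for i in range(n_agents)]
    for i in range(n_agents):
        # Dual ascent on the local consensus constraint x_i = z_i.
        lam[i] += rho * (x[i] - z[i])
        # Local training: several SGD steps on the augmented Lagrangian,
        # using minibatch stochastic gradients of the local loss.
        for _ in range(local_steps):
            idx = rng.choice(m, size=batch, replace=False)
            g = A[i][idx].T @ (A[i][idx] @ x[i] - b[i][idx]) / batch
            x[i] -= lr * (g + lam[i] + rho * (x[i] - z[i]))

consensus_err = max(np.linalg.norm(x[i] - x[0]) for i in range(n_agents))
fit_err = np.linalg.norm(sum(x) / n_agents - x_true)
```

The communication/computation trade-off highlighted in the abstract corresponds to `local_steps`: raising it reduces how often agents exchange `x`, while plain minibatch gradients leave residual noise that motivates the variance-reduced variant.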