🤖 AI Summary
To address the joint optimization of computational and communication efficiency in distributed learning over undirected networks, this paper proposes a novel distributed optimization algorithm within the ADMM framework. The method integrates stochastic gradient estimation, multiple local updates, and gradient compression into a single ADMM formulation with synchronous communication. Theoretically, under strong convexity and smoothness of the objective function, the algorithm achieves exact linear convergence, the first such guarantee for ADMM-based methods that couple gradient compression with local updates. Empirical evaluations on image classification tasks demonstrate that the proposed method outperforms state-of-the-art distributed SGD, FedADMM, and communication-compression approaches, achieving superior trade-offs among convergence speed, communication overhead, and computational load.
📝 Abstract
We address distributed learning problems over undirected networks. Specifically, we focus on designing a novel ADMM-based algorithm that is jointly computation- and communication-efficient. Our design guarantees computational efficiency by allowing agents to use stochastic gradients during local training. Moreover, communication efficiency is achieved as follows: i) the agents perform multiple training epochs between communication rounds, and ii) compressed transmissions are used. We prove exact linear convergence of the algorithm in the strongly convex setting. We corroborate our theoretical results by numerical comparisons with state-of-the-art techniques on a classification task.
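To make the two communication-saving ingredients concrete, here is a minimal sketch of the generic pattern the abstract describes: each agent runs several local stochastic-gradient epochs, then transmits a compressed model update (a hypothetical top-k sparsifier with error feedback is used here for illustration). This is not the paper's ADMM formulation; all problem data, step sizes, and the compressor choice are assumptions for the toy example, which solves a consistent distributed least-squares problem whose common minimizer is the all-ones vector.

```python
import numpy as np

def topk(v, k):
    """Top-k sparsifier: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(0)
d, n_agents, n_local, lr = 10, 4, 5, 0.05

# Each agent holds a private least-squares block; b is consistent by
# construction, so the exact common minimizer is np.ones(d).
A = [rng.standard_normal((20, d)) for _ in range(n_agents)]
b = [Ai @ np.ones(d) for Ai in A]

x = np.zeros(d)                               # shared model
err = [np.zeros(d) for _ in range(n_agents)]  # error-feedback memories

for _ in range(300):                          # communication rounds
    msgs = []
    for i in range(n_agents):
        xi = x.copy()
        for _ in range(n_local):              # multiple local epochs, stochastic gradients
            j = rng.integers(A[i].shape[0])   # sample one data row
            g = A[i][j] * (A[i][j] @ xi - b[i][j])
            xi -= lr * g
        delta = xi - x + err[i]               # re-inject what compression dropped earlier
        msg = topk(delta, k=3)                # compressed transmission
        err[i] = delta - msg
        msgs.append(msg)
    x = x + sum(msgs) / n_agents              # synchronous averaging step

print(np.linalg.norm(x - np.ones(d)))         # distance to the true minimizer
```

The error-feedback memory `err[i]` is what lets the iterates approach the exact solution despite the lossy compressor; dropping it can make plain top-k stall away from the optimum.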