🤖 AI Summary
This paper addresses distributed optimization over undirected connected graphs without a central server, aiming to minimize a composite convex objective $f + r$, where $f$ is the average of local node losses and $r$ is a possibly nonsmooth regularizer, while accounting for practical challenges such as computational errors and inexact subproblem solutions. The authors propose DCatalyst, the first unified black-box acceleration framework for decentralized optimization, which integrates Nesterov acceleration directly into the algorithmic core. It introduces a novel theory of inexact estimating sequences and achieves acceleration uniformly across strongly convex and general convex, smooth and nonsmooth composite problems. DCatalyst couples an inexact, momentum-accelerated proximal outer loop with an arbitrary decentralized inner solver (e.g., DGD, EXTRA, GT-OG), jointly modeling consensus error and subproblem inexactness. Theoretically, it attains optimal communication and computation complexity, up to logarithmic factors. Experiments demonstrate substantial improvements in convergence speed and practical efficacy across diverse algorithms and problem settings.
📝 Abstract
We study decentralized optimization over a network of agents, modeled as a graph, with no central server. The goal is to minimize $f+r$, where $f$ is a (strongly) convex function averaging the local agents' losses and $r$ is a convex, extended-value function. We introduce DCatalyst, a unified black-box framework that integrates Nesterov acceleration into decentralized optimization algorithms. At its core, DCatalyst operates as an *inexact*, *momentum-accelerated* proximal method (forming the outer loop) that seamlessly incorporates any selected decentralized algorithm (as the inner loop). We demonstrate that DCatalyst achieves optimal communication and computational complexity (up to log-factors) across various decentralized algorithms and problem instances. Notably, it extends acceleration capabilities to problem classes previously lacking accelerated solution methods, thereby broadening the effectiveness of decentralized methods. On the technical side, our framework introduces the *inexact estimating sequences*, a novel extension of Nesterov's well-known estimating sequences, tailored to the minimization of composite losses in decentralized settings. This construction adeptly handles consensus errors and inexact solutions of agents' subproblems, challenges not addressed by existing models.
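To make the outer/inner structure concrete, here is a minimal, hypothetical sketch of a Catalyst-style accelerated outer loop: each outer iteration inexactly minimizes the proximal subproblem $f(z) + \tfrac{\kappa}{2}\|z - y\|^2$ and then applies Nesterov extrapolation. The paper's DCatalyst runs a *decentralized* method (e.g., DGD or EXTRA) as the inner solver and additionally controls consensus error; here a plain centralized gradient loop stands in purely for illustration, and the function names and parameters (`dcatalyst_outer`, `inner_steps`, the assumed 1-Lipschitz gradient) are our own, not the paper's.

```python
import numpy as np

def dcatalyst_outer(grad_f, mu, kappa, x0, inner_steps=50, outer_iters=30):
    """Sketch of an inexact momentum-accelerated proximal outer loop.

    Assumptions (ours, for illustration): f is mu-strongly convex with
    1-Lipschitz gradient; the inner loop is plain gradient descent on the
    regularized subproblem, standing in for a decentralized solver.
    """
    q = mu / (mu + kappa)                        # subproblem conditioning
    beta = (1 - np.sqrt(q)) / (1 + np.sqrt(q))   # Nesterov momentum weight
    x, y = x0.copy(), x0.copy()
    L_sub = 1.0 + kappa                          # Lipschitz constant of subproblem gradient
    for _ in range(outer_iters):
        # Inner loop: inexactly minimize f(z) + (kappa/2)||z - y||^2,
        # warm-started at the current iterate x.
        z = x.copy()
        for _ in range(inner_steps):
            g = grad_f(z) + kappa * (z - y)
            z = z - g / L_sub
        x_new = z
        y = x_new + beta * (x_new - x)           # momentum extrapolation
        x = x_new
    return x

# Illustrative use on a strongly convex quadratic f(x) = x'Hx/2 - b'x.
H = np.diag([1.0, 0.5, 0.2])
b = np.ones(3)
x_star = np.linalg.solve(H, b)                   # exact minimizer for comparison
x_hat = dcatalyst_outer(lambda x: H @ x - b, mu=0.2, kappa=0.1, x0=np.zeros(3))
```

The sketch omits the regularizer $r$ and the inexactness certificates that the inexact estimating sequences track; it only conveys the loop structure of wrapping an arbitrary inner solver in an accelerated proximal scheme.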