🤖 AI Summary
To jointly reduce communication and computational overhead in decentralized optimization, this paper proposes Stabilized Proximal Decentralized Optimization (SPDO). Unlike existing proximal methods, which require high-precision subproblem solutions to exploit functional similarity, SPDO is the first to relax subproblem accuracy requirements within the Proximal Decentralized Optimization (PDO) framework, while also introducing average functional similarity and a stability-preservation mechanism. Theoretically, SPDO achieves state-of-the-art communication and computational complexity within the PDO framework. Empirically, across diverse network topologies and non-i.i.d. data distributions, SPDO reduces the number of communication rounds by 30%–50% and cuts total computation time by over 40%, delivering significant efficiency gains without sacrificing convergence stability or accuracy.
📝 Abstract
Reducing communication complexity is critical for efficient decentralized optimization. The proximal decentralized optimization (PDO) framework is particularly appealing, as methods within this framework can exploit functional similarity among nodes to reduce communication rounds. Specifically, when local functions at different nodes are similar, these methods achieve faster convergence with fewer communication steps. However, existing PDO methods often require highly accurate solutions to subproblems associated with the proximal operator, resulting in significant computational overhead. In this work, we propose the Stabilized Proximal Decentralized Optimization (SPDO) method, which achieves state-of-the-art communication and computational complexities within the PDO framework. Additionally, we refine the analysis of existing PDO methods by relaxing subproblem accuracy requirements and leveraging average functional similarity. Experimental results demonstrate that SPDO significantly outperforms existing methods.
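To make the abstract's key objects concrete, the following is a standard sketch of the inexact proximal subproblem and the functional-similarity assumption common to PDO-style methods; the notation ($\eta$, $\varepsilon_k$, $\delta$) is illustrative and not taken from the paper itself. Each iteration asks a node to approximately solve a regularized local problem

$$
x^{k+1} \approx \operatorname*{arg\,min}_{x}\; f_i(x) + \frac{1}{2\eta}\,\lVert x - x^k \rVert^2,
\qquad
\bigl\lVert x^{k+1} - \operatorname{prox}_{\eta f_i}(x^k) \bigr\rVert \le \varepsilon_k ,
$$

where $\varepsilon_k$ is the subproblem accuracy that existing methods drive very small (causing the computational overhead the abstract mentions). Functional similarity among the $n$ nodes is typically modeled as a bound on Hessian discrepancies relative to the average objective $f = \tfrac{1}{n}\sum_{i=1}^{n} f_i$,

$$
\bigl\lVert \nabla^2 f_i(x) - \nabla^2 f(x) \bigr\rVert \le \delta \quad \text{for all } i, x ,
$$

and a smaller $\delta$ permits a larger proximal step $\eta$, which is how similarity translates into fewer communication rounds.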