🤖 AI Summary
This work addresses distributed nonsmooth convex composite optimization. We propose a decentralized Halpern–Peaceman–Rachford (dHPR) algorithm. Methodologically, dHPR is the first to embed the Halpern iteration scheme into a decentralized Peaceman–Rachford framework, integrating symmetric Gauss–Seidel splitting and operator decoupling to achieve fully decentralized parallel computation without requiring additional large proximal terms. Theoretically, we establish a non-ergodic $O(1/k)$ convergence rate with respect to the Karush–Kuhn–Tucker residual, improving on the ergodic guarantees of prior decentralized methods. Empirically, dHPR demonstrates superior performance on distributed LASSO, group LASSO, and $\ell_1$-regularized logistic regression tasks, consistently outperforming state-of-the-art distributed optimization algorithms in both convergence speed and communication efficiency.
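For orientation, Halpern anchoring of a fixed-point operator $T$ takes the standard form below, where $T$ is the Peaceman–Rachford operator built from the proximal maps of the two blocks of a composite objective $f(x) + g(x)$; this is a sketch of the generic scheme, and the paper's decentralized update adds the symmetric Gauss–Seidel and consensus structure on top of it:

$$z^{k+1} = \lambda_k z^0 + (1-\lambda_k)\,T(z^k), \qquad T = (2\,\mathrm{prox}_{\sigma g} - I)\circ(2\,\mathrm{prox}_{\sigma f} - I), \qquad \lambda_k = \frac{1}{k+2}.$$

The anchor $z^0$ is the initial point, and it is this anchoring that converts the usual ergodic guarantee of Peaceman–Rachford-type splitting into a non-ergodic $O(1/k)$ rate on the fixed-point (and hence KKT) residual.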
📝 Abstract
This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with nonsmooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity with respect to the Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $\ell_1$-regularized logistic regression problems.
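As a concrete illustration of the building blocks, here is a minimal centralized sketch of a Halpern-anchored Peaceman--Rachford iteration on a LASSO instance $\min_x \tfrac{1}{2}\|Ax-b\|^2 + \mu\|x\|_1$. The function names, the fixed step size `sigma`, and the dense linear solve are illustrative choices; none of the decentralized symmetric Gauss--Seidel machinery of dHPR is reproduced here.

```python
import numpy as np

def prox_l1(v, t):
    """Soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hpr_lasso(A, b, mu, sigma=1.0, iters=500):
    """Halpern-anchored Peaceman-Rachford for 0.5*||Ax-b||^2 + mu*||x||_1.

    Centralized toy sketch only; sigma is an illustrative fixed step size.
    """
    n = A.shape[1]
    # prox of sigma * 0.5*||Ax-b||^2 solves (I + sigma*A'A) x = z + sigma*A'b
    M = np.eye(n) + sigma * (A.T @ A)
    Atb = A.T @ b
    z0 = np.zeros(n)          # Halpern anchor (initial point)
    z = z0.copy()
    for k in range(iters):
        x = np.linalg.solve(M, z + sigma * Atb)  # prox_{sigma f}(z)
        u = 2.0 * x - z                          # reflection of first prox
        y = prox_l1(u, sigma * mu)               # prox_{sigma g}(u)
        Tz = 2.0 * y - u                         # Peaceman-Rachford operator T(z)
        lam = 1.0 / (k + 2)                      # Halpern anchoring weight
        z = lam * z0 + (1.0 - lam) * Tz          # anchored fixed-point update
    # recover the primal iterate from the final fixed-point variable
    x = np.linalg.solve(M, z + sigma * Atb)
    return prox_l1(2.0 * x - z, sigma * mu)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 100))
    x_true = rng.standard_normal(100) * (rng.random(100) < 0.1)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = hpr_lasso(A, b, mu=0.1)
    print("nonzeros recovered:", np.count_nonzero(np.abs(x_hat) > 1e-6))
```

In the distributed setting of the paper, the single proximal step on the smooth block would instead be carried out in parallel across agents, with the consensus constraint handled through the symmetric Gauss--Seidel decomposition rather than the dense solve used above.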