dHPR: A Distributed Halpern Peaceman--Rachford Method for Non-smooth Distributed Optimization Problems

📅 2025-11-13
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses distributed nonsmooth convex composite optimization. We propose a decentralized Halpern–Peaceman–Rachford (dHPR) algorithm. Methodologically, dHPR is the first to embed the Halpern iteration scheme into a decentralized Peaceman–Rachford framework, integrating symmetric Gauss–Seidel splitting and operator decoupling to achieve fully decentralized parallel computation—without requiring auxiliary proximal terms. Theoretically, we establish a non-ergodic $O(1/k)$ convergence rate, significantly improving both stability and convergence speed over prior decentralized methods. Empirically, dHPR demonstrates superior performance on distributed LASSO, group LASSO, and $ell_1$-regularized logistic regression tasks, consistently outperforming state-of-the-art distributed optimization algorithms in both convergence speed and communication efficiency.

Technology Category

Application Category

📝 Abstract
This paper introduces the distributed Halpern Peaceman--Rachford (dHPR) method, an efficient algorithm for solving distributed convex composite optimization problems with non-smooth objectives, which achieves a non-ergodic $O(1/k)$ iteration complexity regarding Karush--Kuhn--Tucker residual. By leveraging the symmetric Gauss--Seidel decomposition, the dHPR effectively decouples the linear operators in the objective functions and consensus constraints while maintaining parallelizability and avoiding additional large proximal terms, leading to a decentralized implementation with provably fast convergence. The superior performance of dHPR is demonstrated through comprehensive numerical experiments on distributed LASSO, group LASSO, and $L_1$-regularized logistic regression problems.
Problem

Research questions and friction points this paper is trying to address.

Solving distributed non-smooth convex composite optimization problems efficiently
Decoupling linear operators while maintaining parallelizability in distributed optimization
Achieving fast convergence for distributed LASSO and regularized logistic regression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed Halpern Peaceman-Rachford method for non-smooth optimization
Uses symmetric Gauss-Seidel decomposition to decouple operators
Achieves fast convergence with decentralized parallel implementation
🔎 Similar Papers
No similar papers found.
Z
Zhangcheng Feng
Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
D
Defeng Sun
Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Hong Kong
Yancheng Yuan
Yancheng Yuan
Assistant Professor, The Hong Kong Polytechnic University
Optimization AlgorithmsMachine Learning
Guojun Zhang
Guojun Zhang
MiniMax
LLMAlignment/RLHFTransfer Learning