🤖 AI Summary
Addressing asynchronous decentralized nonconvex optimization and learning over networks of distributed agents, this paper proposes an asynchronous decentralized ADMM algorithm based on randomized block-coordinate Douglas–Rachford splitting. The method requires no global clock synchronization or central coordination: each agent performs local updates and communicates asynchronously with its neighbors. Crucially, it converges to first-order stationary points even without convexity assumptions, making it, to the authors' knowledge, the first asynchronous decentralized algorithm for nonconvex optimization with a rigorous convergence guarantee. Experiments on distributed phase retrieval and sparse principal component analysis demonstrate its efficacy: compared with synchronous baselines, it achieves significantly higher communication efficiency and greater robustness to heterogeneous computation delays, while maintaining stable convergence in fully decentralized network topologies.
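To make the activation pattern concrete, here is a minimal Python sketch of randomized single-agent wake-ups in an edge-based consensus ADMM with scaled duals. The quadratic local losses, ring topology, penalty `rho`, and the specific update rules are illustrative assumptions chosen so each prox step is closed-form; this is not the paper's exact Douglas–Rachford-based algorithm, which handles nonconvex losses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 agents on a ring. Each agent holds a private
# least-squares loss f_i(x) = 0.5*||A_i x - b_i||^2 (a convex stand-in
# with a closed-form prox; the paper targets nonconvex f_i).
n_agents, dim, rho = 4, 3, 1.0
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
A = [rng.standard_normal((5, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(5) for _ in range(n_agents)]

def edge(i, j):                       # undirected edge key
    return (min(i, j), max(i, j))

edges = {edge(i, j) for i in neighbors for j in neighbors[i]}
x = [np.zeros(dim) for _ in range(n_agents)]           # primal iterates
z = {e: np.zeros(dim) for e in edges}                  # edge consensus vars
u = {(i, e): np.zeros(dim) for e in edges for i in e}  # scaled duals

for _ in range(3000):
    i = int(rng.integers(n_agents))   # random single-agent wake-up = asynchrony
    my_edges = [edge(i, j) for j in neighbors[i]]
    # Local prox step:
    #   x_i <- argmin_x f_i(x) + (rho/2) * sum_e ||x - z_e + u_{i,e}||^2
    rhs = A[i].T @ b[i] + rho * sum(z[e] - u[(i, e)] for e in my_edges)
    H = A[i].T @ A[i] + rho * len(my_edges) * np.eye(dim)
    x[i] = np.linalg.solve(H, rhs)
    # Block-coordinate step: update only the blocks adjacent to agent i.
    # x[j] plays the role of the last value received from neighbor j
    # over an asynchronous link; agent i never touches u_{j,e}.
    for j in neighbors[i]:
        e = edge(i, j)
        z[e] = 0.5 * ((x[i] + u[(i, e)]) + (x[j] + u[(j, e)]))
        u[(i, e)] += x[i] - z[e]

consensus = np.mean(x, axis=0)
print("max deviation from consensus:",
      max(np.linalg.norm(xi - consensus) for xi in x))
```

Because only one agent's blocks change per tick and each edge's two duals are refreshed by their respective owners on their own schedules, no global clock or coordinator is needed, mirroring the asynchrony model described above.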
📝 Abstract
In this paper, we consider nonconvex decentralised optimisation and learning over a network of distributed agents. We develop an ADMM algorithm based on the Randomised Block Coordinate Douglas–Rachford splitting method, which enables the agents in the network to compute, in a distributed and asynchronous fashion, a set of first-order stationary solutions of the problem. To the best of our knowledge, this is the first decentralised and asynchronous algorithm for solving nonconvex optimisation problems with a convergence proof. Numerical examples demonstrate the efficiency of the proposed algorithm on distributed Phase Retrieval and sparse Principal Component Analysis problems.
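For context, decentralised problems of this kind are typically posed in the following consensus form, with each agent keeping a local copy of the decision variable; this is the standard formulation, and the notation below is assumed here rather than taken from the paper.

```latex
% N agents; agent i holds a private (possibly nonconvex) loss f_i;
% E is the edge set of the communication graph.
\min_{x_1,\dots,x_N \in \mathbb{R}^d} \; \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad x_i = x_j \;\; \forall (i,j) \in E.
```

The consensus constraints are what the edge variables and dual updates of an ADMM scheme enforce, so each agent only ever needs to agree with its immediate neighbours.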