🤖 AI Summary
This work addresses decentralized multi-agent optimization, where agents collaboratively minimize a composite objective (a sum of smooth, strongly convex local losses plus a nonsmooth convex regularizer) using only local information. Building on the three-operator splitting framework, the proposed method combines a BCV-preconditioned metric with local backtracking stepsizes and a lightweight min-consensus protocol to achieve adaptive decentralized optimization. Each agent selects its own adaptive stepsize locally, and the algorithm converges linearly under global strong convexity together with partial smoothness of the nonsmooth term, and sublinearly in the general convex setting. Numerical experiments corroborate both the theoretical convergence properties and the practical efficacy of the proposed approach.
📝 Abstract
The paper studies decentralized optimization over networks, where agents minimize a sum of {\it locally} smooth (strongly) convex losses plus a nonsmooth convex extended-valued term. We propose decentralized methods wherein agents {\it adaptively} adjust their stepsizes via local backtracking procedures coupled with lightweight min-consensus protocols. Our design stems from a three-operator splitting factorization applied to an equivalent reformulation of the problem. The reformulation is endowed with a new BCV preconditioning metric (Bertsekas-O'Connor-Vandenberghe), which enables efficient decentralized implementation and local stepsize adjustments. We establish robust convergence guarantees: under mere convexity, the proposed methods converge at a sublinear rate; under strong convexity of the sum-function, and assuming the nonsmooth component is partly smooth, we further prove linear convergence. Numerical experiments corroborate the theory and highlight the effectiveness of the proposed adaptive stepsize strategy.
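To make the two adaptive ingredients concrete, here is a minimal sketch (not the paper's exact algorithm) of how a local backtracking stepsize search and a min-consensus protocol might look in isolation. All names (`f`, `grad`, `neighbors`, `diameter`, the sufficient-decrease test) are illustrative assumptions: each agent shrinks its stepsize on its own smooth loss, then the smallest stepsize propagates through the network by repeated neighbor-wise minima.

```python
def backtrack(f, grad, x, gamma0=1.0, beta=0.5, max_iter=50):
    """Hypothetical local backtracking: shrink gamma until the standard
    sufficient-decrease condition f(x - gamma*g) <= f(x) - 0.5*gamma*||g||^2
    holds for the agent's own smooth loss f."""
    g = grad(x)
    gamma = gamma0
    for _ in range(max_iter):
        x_new = [xi - gamma * gi for xi, gi in zip(x, g)]
        if f(x_new) <= f(x) - 0.5 * gamma * sum(gi * gi for gi in g):
            break
        gamma *= beta
    return gamma

def min_consensus(local_values, neighbors, diameter):
    """Min-consensus over an undirected graph: each agent repeatedly
    replaces its value by the min over itself and its neighbors; after
    `diameter` rounds every agent holds the network-wide minimum."""
    vals = dict(local_values)
    for _ in range(diameter):
        vals = {i: min([vals[i]] + [vals[j] for j in neighbors[i]])
                for i in vals}
    return vals
```

For example, on a path graph `0 - 1 - 2` with local stepsizes `{0: 0.5, 1: 0.2, 2: 0.8}`, two rounds of min-consensus leave every agent holding `0.2`. The protocol is "lightweight" in the sense that each round exchanges a single scalar per edge.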