AI Summary
To address privacy leakage from sharing model updates in decentralized minimax optimization, and the convergence degradation that differential privacy (DP) noise causes in nonconvex settings, this paper proposes DPMixSGD, the first distributed algorithm to simultaneously achieve rigorous DP guarantees and efficient convergence. DPMixSGD integrates a DP mechanism into local gradient computation and combines STORM-based variance reduction with hybrid stochastic gradient descent, enabling privacy-preserving gradient exchange over decentralized topologies. Theoretically, it attains the optimal convergence rate of $O(1/\sqrt{T})$ for nonconvex minimax problems, with the privacy budget $\varepsilon$ leaving the asymptotic convergence order unaffected. Empirically, DPMixSGD significantly outperforms existing DP-based distributed methods on multiple game-theoretic learning tasks, maintaining high accuracy and stability even under strong privacy protection ($\varepsilon \leq 2$).
Abstract
Decentralized min-max optimization allows multi-agent systems to collaboratively solve global min-max problems by exchanging model updates among neighboring agents, eliminating the need for a central server. However, sharing model updates in such systems carries a risk of exposing sensitive data to inference attacks, raising significant privacy concerns. To mitigate these risks, differential privacy (DP) has become a widely adopted technique for safeguarding individual data. Despite its advantages, implementing DP in decentralized min-max optimization is challenging, as the added noise can hinder convergence, particularly in non-convex settings with complex agent interactions. In this work, we propose DPMixSGD (Differential Private Minmax Hybrid Stochastic Gradient Descent), a novel privacy-preserving algorithm specifically designed for non-convex decentralized min-max optimization. Our method builds on the state-of-the-art STORM-based algorithm, one of the fastest decentralized min-max solvers. We rigorously prove that the noise added to local gradients does not significantly compromise convergence, and we provide theoretical bounds that ensure privacy guarantees. To validate our theoretical findings, we conduct extensive experiments across various tasks and models, demonstrating the effectiveness of our approach.