Enhancing Privacy in Decentralized Min-Max Optimization: A Differentially Private Approach

📅 2025-08-10
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address privacy leakage from model update sharing in decentralized minimax optimization, and the convergence degradation that differential privacy (DP) noise causes in nonconvex settings, this paper proposes DPMixSGD, the first distributed algorithm that simultaneously achieves rigorous DP guarantees and efficient convergence. DPMixSGD integrates a DP mechanism into local gradient computation and combines STORM-based variance reduction with hybrid stochastic gradient descent to enable privacy-preserving gradient exchange over decentralized topologies. Theoretically, it attains the optimal convergence rate of $O(1/\sqrt{T})$ for nonconvex minimax problems, with the privacy budget $\varepsilon$ not affecting the asymptotic convergence order. Empirically, DPMixSGD significantly outperforms existing DP-based distributed methods across multiple game-theoretic learning tasks, maintaining high accuracy and stability even under strong privacy protection ($\varepsilon \leq 2$).

πŸ“ Abstract
Decentralized min-max optimization allows multi-agent systems to collaboratively solve global min-max optimization problems by facilitating the exchange of model updates among neighboring agents, eliminating the need for a central server. However, sharing model updates in such systems carries a risk of exposing sensitive data to inference attacks, raising significant privacy concerns. To mitigate these privacy risks, differential privacy (DP) has become a widely adopted technique for safeguarding individual data. Despite its advantages, implementing DP in decentralized min-max optimization poses challenges, as the added noise can hinder convergence, particularly in non-convex scenarios with complex agent interactions. In this work, we propose an algorithm called DPMixSGD (Differential Private Minmax Hybrid Stochastic Gradient Descent), a novel privacy-preserving algorithm specifically designed for non-convex decentralized min-max optimization. Our method builds on the state-of-the-art STORM-based algorithm, one of the fastest decentralized min-max solutions. We rigorously prove that the noise added to local gradients does not significantly compromise convergence performance, and we provide theoretical bounds to ensure privacy guarantees. To validate our theoretical findings, we conduct extensive experiments across various tasks and models, demonstrating the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Enhancing privacy in decentralized min-max optimization
Mitigating sensitive data exposure from model sharing
Maintaining convergence while adding differential privacy noise
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential Private Minmax Hybrid Stochastic Gradient Descent
STORM-based decentralized optimization algorithm
Theoretical privacy guarantees with convergence bounds
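The decentralized side of the contributions above can be sketched as a single gossip round: each agent mixes its variables with its neighbors' (via a doubly stochastic matrix over the communication graph), then takes a descent step on the min variable and an ascent step on the max variable. This is a generic decentralized gradient-descent-ascent sketch under those assumptions, not DPMixSGD itself; the function name and shapes are illustrative.

```python
import numpy as np

def gossip_minmax_step(X, Y, W, grad_x, grad_y, lr):
    """One decentralized gradient-descent-ascent round (illustrative sketch).

    X, Y: (n_agents, dim) primal/dual variables; W: doubly stochastic mixing
    matrix over the communication graph; grad_x, grad_y: per-agent (possibly
    DP-noised) gradient arrays of the same shape; lr: step size.
    """
    X_mix = W @ X                 # average neighbors' primal variables
    Y_mix = W @ Y                 # average neighbors' dual variables
    X_new = X_mix - lr * grad_x   # descent on the min variable
    Y_new = Y_mix + lr * grad_y   # ascent on the max variable
    return X_new, Y_new
```

In a DP variant, the gradients passed in would be the clipped-and-noised estimates, so only privatized information influences what neighbors observe.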