DP-CSGP: Differentially Private Stochastic Gradient Push with Compressed Communication

📅 2025-12-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses decentralized learning over directed graphs, tackling the joint optimization of differential privacy, gradient compression, and model utility under non-convex smooth objectives. We propose the first framework that combines stochastic gradient push with quantized/sparse gradient compression and error-feedback updates while providing rigorous (ε,δ)-differential privacy in directed network settings. Theoretically, we establish a convergence rate of O(√(d log(1/δ))/(√n Jε)), matching the optimal rate of uncompressed differentially private methods. Empirically, our method attains model accuracy comparable to uncompressed baselines under identical privacy budgets while substantially reducing communication overhead. Our core contribution is the first DP-compression co-design framework for directed topologies, uniquely combining strict privacy guarantees, order-optimal convergence, and communication efficiency.

📝 Abstract
In this paper, we propose a Differentially Private Stochastic Gradient Push with Compressed communication (termed DP-CSGP) for decentralized learning over directed graphs. Different from existing works, the proposed algorithm is designed to maintain high model utility while ensuring both rigorous differential privacy (DP) guarantees and efficient communication. For general non-convex and smooth objective functions, we show that the proposed algorithm achieves a tight utility bound of $\mathcal{O}\left(\sqrt{d\log(1/\delta)}/(\sqrt{n}J\epsilon)\right)$ ($J$ and $d$ are the number of local samples and the dimension of decision variables, respectively) with an $(\epsilon,\delta)$-DP guarantee for each node, matching that of decentralized counterparts with exact communication. Extensive experiments on benchmark tasks show that, under the same privacy budget, DP-CSGP achieves comparable model accuracy with significantly lower communication cost than existing decentralized counterparts with exact communication.
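The abstract describes combining a Gaussian-mechanism DP step with compressed, error-compensated communication. A minimal single-node sketch of that combination is below; the clipping threshold `clip`, noise multiplier `sigma`, top-k compressor, and step size `lr` are illustrative assumptions, not the paper's exact update rule or parameter names.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def dp_compressed_step(x, grad, err, lr=0.1, clip=1.0, sigma=1.0, k=2, rng=None):
    """One local step: clip the gradient, add Gaussian noise for DP,
    then compress the noisy update with error feedback.

    Returns the new local model, the compressed message to push to
    out-neighbors, and the updated error-feedback residual.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clipping bounds the gradient's sensitivity by `clip`.
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
    # Gaussian mechanism: noise scale proportional to clip * sigma.
    g_noisy = g + rng.normal(0.0, sigma * clip, size=g.shape)
    # Error feedback: compress the new update plus the carried residual.
    update = lr * g_noisy + err
    msg = topk_compress(update, k)
    err_new = update - msg          # residual carried to the next round
    x_new = x - msg                 # apply only what is actually sent
    return x_new, msg, err_new
```

In the full algorithm each node would push `msg` (together with push-sum weights) to its out-neighbors over the directed graph; this sketch omits that mixing step.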
Problem

Research questions and friction points this paper is trying to address.

Ensuring rigorous differential privacy for each node in decentralized learning over directed graphs
Compressing communication to reduce per-round transmission cost
Maintaining model accuracy under a fixed privacy budget
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differential privacy co-designed with compression for decentralized learning over directed graphs
Compressed, error-feedback communication for efficiency
Tight utility bound for non-convex smooth objectives
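The "stochastic gradient push" in the method's name builds on push-sum averaging, which handles directed graphs where mixing matrices can generally only be made column-stochastic. A minimal sketch of one push-sum round follows; the 3-node ring, mixing weights, and round count are illustrative choices, not taken from the paper.

```python
import numpy as np

def push_sum_round(X, Y, A):
    """One synchronous push-sum mixing round over a directed graph.

    X : (n, d) array of node values; Y : (n,) push-sum weights;
    A : (n, n) column-stochastic mixing matrix (A[i, j] is the share
    node j pushes to node i). The de-biased estimates Z = X / Y
    converge to the average of the initial values even though A is
    only column-stochastic, not doubly stochastic.
    """
    X_new = A @ X
    Y_new = A @ Y
    Z = X_new / Y_new[:, None]
    return X_new, Y_new, Z

# Toy 3-node directed ring: each node keeps half its mass and pushes
# half to its single out-neighbor, so every column of A sums to one.
A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])
X = np.array([[0.0], [3.0], [6.0]])   # initial values, average = 3
Y = np.ones(3)
for _ in range(50):
    X, Y, Z = push_sum_round(X, Y, A)
```

After the mixing rounds, every node's de-biased estimate in `Z` approaches the network average of 3, which is what lets gradient-push-style methods average models over directed links.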