Differential Privacy Analysis of Decentralized Gossip Averaging under Varying Threat Models

📅 2025-05-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
In decentralized gossip-averaging training without a central aggregator and under heterogeneous node trust, node-level differential privacy (DP) faces challenges including ill-defined privacy leakage bounds and loose analytical relaxations. This paper proposes a novel privacy analysis framework based on linear system modeling, unifying the characterization of privacy leakage mechanisms—both with and without secure aggregation—for the first time. It tightens the growth of Rényi DP parameters from $O(T^2)$ to $O(T)$. Theoretical analysis proves the bound is tight and scalable; experiments on MNIST logistic regression show that, under identical privacy budgets, our method achieves utility approaching that of centralized baselines. The core contribution is the establishment of the first node-level DP analysis paradigm for gossip networks under heterogeneous trust models, enabling a substantial improvement in the privacy–utility trade-off.
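The linear-system modeling mentioned above can be sketched as follows (the notation here is ours, introduced for illustration; the paper's exact formulation may differ). One noisy gossip round over a doubly stochastic mixing matrix $W$ is

```latex
x_{t+1} = W\,(x_t + \eta_t), \qquad \eta_t \sim \mathcal{N}(0, \sigma^2 I),
```

and an adversary controlling a node $v$ observes only a linear projection $y_t = C_v\,(x_t + \eta_t)$ of the noisy state. The full transcript $(y_0, \dots, y_{T-1})$ is then the output of a linear system driven by the private inputs and i.i.d. noise, which is the structure that makes a tight leakage analysis tractable.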

📝 Abstract
Fully decentralized training of machine learning models offers significant advantages in scalability, robustness, and fault tolerance. However, achieving differential privacy (DP) in such settings is challenging due to the absence of a central aggregator and varying trust assumptions among nodes. In this work, we present a novel privacy analysis of decentralized gossip-based averaging algorithms with additive node-level noise, both with and without secure summation over each node's direct neighbors. Our main contribution is a new analytical framework based on a linear systems formulation that accurately characterizes privacy leakage across these scenarios. This framework significantly improves upon prior analyses, for example, reducing the Rényi DP parameter growth from $O(T^2)$ to $O(T)$, where $T$ is the number of training rounds. We validate our analysis with numerical results demonstrating superior DP bounds compared to existing approaches. We further illustrate our analysis with a logistic regression experiment on MNIST image classification in a fully decentralized setting, demonstrating utility comparable to central aggregation methods.
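The mechanism the abstract describes, gossip averaging with additive node-level noise, can be simulated in a few lines. The sketch below is ours, not the paper's code: it uses a ring topology, a fixed doubly stochastic mixing matrix, and an illustrative noise scale that is not calibrated to any DP budget.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ring topology over n nodes; W is a doubly stochastic gossip matrix
# in which each node averages itself with its two neighbors.
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3

x = rng.normal(size=n)   # each node's private scalar
true_mean = x.mean()
sigma = 0.1              # illustrative noise scale, not DP-calibrated

# T rounds of noisy gossip: each node adds Gaussian noise to its
# current value (node-level DP noise), then the network mixes.
T = 200
for _ in range(T):
    x = W @ (x + rng.normal(scale=sigma, size=n))

spread = x.max() - x.min()           # remaining disagreement between nodes
err = np.abs(x - true_mean).max()    # deviation from the true average
print(f"spread={spread:.3f}  max error={err:.3f}")
```

Because fresh noise is injected every round, the nodes reach approximate consensus near the true mean rather than exact agreement; this residual error is the utility cost of the privacy noise that the paper's analysis trades off against the DP guarantee.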
Problem

Research questions and friction points this paper is trying to address.

Analyzing differential privacy in decentralized gossip averaging algorithms
Improving privacy leakage analysis with a linear systems framework
Validating DP bounds in decentralized ML training scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized gossip averaging with noise
Linear systems framework for privacy analysis
Improved Rényi DP bounds from O(T²) to O(T)
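The practical weight of the $O(T^2) \to O(T)$ improvement is easy to see numerically. The snippet below is illustrative only: the Rényi order and noise multiplier are assumed values, and the two growth laws stand in for the loose prior analysis and a tight per-round composition, not the paper's exact bounds.

```python
# Illustrative comparison of Renyi-DP budget growth over T rounds.
# Assumed constants, not taken from the paper:
alpha = 8.0   # Renyi order
sigma = 2.0   # per-round Gaussian noise multiplier

def rdp_tight(T):
    # Tight per-round composition: eps(alpha) = T * alpha / (2 * sigma^2),
    # i.e. linear growth in the number of rounds T.
    return T * alpha / (2 * sigma ** 2)

def rdp_loose(T):
    # Pessimistic analysis in which the Renyi parameter grows as O(T^2).
    return T ** 2 * alpha / (2 * sigma ** 2)

for T in (10, 100, 1000):
    print(f"T={T:5d}  tight={rdp_tight(T):10.1f}  loose={rdp_loose(T):12.1f}")
```

At $T = 1000$ rounds the quadratic bound is a thousand times larger than the linear one, which is why tightening the growth rate translates directly into either a much smaller privacy budget or much less noise at the same budget.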