🤖 AI Summary
To address indirect data leakage caused by gradient transmission in decentralized federated learning (DFL), this paper proposes a lossless privacy-preserving aggregation mechanism. Methodologically, it introduces, for the first time, a theory of noise-differential injection and noise-flow conservation, establishing a framework that comprises stochastic noise-differential design, neighbor-coupled randomness modeling, and provably enhanced differential privacy, ensuring strict global noise cancellation while achieving lossless privacy-accuracy co-optimization. Theoretically, the mechanism improves the privacy gain by a factor of √2. Empirically, it preserves the original DFL model's accuracy while improving average test accuracy by 13% over conventional noise-addition baselines; rigorous experiments further verify both the irrecoverability of raw data and the unbiasedness of the aggregated gradients.
📝 Abstract
Privacy concerns arise as sensitive data proliferate. Although decentralized federated learning (DFL) aggregates gradients from neighbors to avoid direct data transmission, the transmitted gradients still pose a risk of indirect data leakage. Existing privacy-preserving methods for DFL add noise to gradients, but they either diminish the model's predictive accuracy or protect gradients ineffectively. In this paper, we propose a novel lossless privacy-preserving aggregation rule, named LPPA, that strengthens gradient protection without any loss of DFL model predictive accuracy. LPPA subtly injects the difference between the noise sent to and received from neighbors into the transmitted gradients. This noise difference incorporates neighbors' randomness for each client, effectively safeguarding against data leakage. LPPA employs noise-flow conservation theory to ensure that the noise impact can be globally eliminated: the global sum of all noise differences remains zero, so accurate gradient aggregation is unaffected and model accuracy remains intact. We theoretically prove that the privacy-preserving capacity of LPPA is √2 times greater than that of noise addition, while maintaining model accuracy comparable to standard DFL aggregation without noise injection. Experimental results verify the theoretical findings and show that LPPA achieves a 13% mean improvement in accuracy over noise addition. We also demonstrate the effectiveness of LPPA in protecting raw data and guaranteeing lossless model accuracy.
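The core idea of the abstract, injecting the difference between sent and received noise so that all perturbations cancel globally, can be illustrated with a minimal NumPy sketch. This is not the paper's actual LPPA implementation; the client count, dimensions, and noise model below are hypothetical choices for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 3

# Hypothetical local gradients, one per client.
grads = rng.normal(size=(n_clients, dim))

# Pairwise noise: noise[i, j] is the noise client i sends to client j.
# (For simplicity every client is treated as every other's neighbor.)
noise = rng.normal(size=(n_clients, n_clients, dim))

# Each client perturbs its gradient with the *difference* between the
# noise it sends out and the noise it receives back from its neighbors.
perturbed = np.array([
    grads[i] + noise[i].sum(axis=0) - noise[:, i].sum(axis=0)
    for i in range(n_clients)
])

# Noise-flow conservation: every pairwise noise term appears once with a
# plus sign (at the sender) and once with a minus sign (at the receiver),
# so the differences cancel globally and aggregation is lossless.
assert np.allclose(perturbed.sum(axis=0), grads.sum(axis=0))
```

If each pairwise noise term is drawn i.i.d. with variance σ², the injected difference of two independent terms has variance 2σ², i.e. standard deviation √2·σ versus σ for plain noise addition, which is consistent with the √2 privacy gain stated above, yet the aggregate remains exactly unbiased.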