Local Layer-wise Differential Privacy in Federated Learning

📅 2026-01-05
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant utility degradation in existing differentially private federated learning approaches, which typically apply uniform noise globally and struggle to balance privacy and model performance. To overcome this limitation, the authors propose LaDP, a novel mechanism that, for the first time, integrates inter-layer utility contributions with KL divergence to quantify privacy leakage risk, enabling fine-grained, adaptive noise injection at the layer level. The method provides rigorous $(\varepsilon, \delta)$-differential privacy guarantees along with convergence analysis. Experimental results demonstrate that LaDP reduces average noise by 46.14% and improves accuracy by 102.99% on CIFAR-10/100 benchmarks. Under the same privacy budget, it outperforms state-of-the-art methods by 25.18% in accuracy and raises the FID of data recovered by reconstruction attacks by over 12.84%, indicating stronger privacy protection.

📝 Abstract
Federated Learning (FL) enables collaborative model training without direct data sharing, yet it remains vulnerable to privacy attacks such as model inversion and membership inference. Existing differential privacy (DP) solutions for FL often inject noise uniformly across the entire model, degrading utility while providing suboptimal privacy-utility tradeoffs. To address this, we propose LaDP, a novel layer-wise adaptive noise injection mechanism for FL that optimizes privacy protection while preserving model accuracy. LaDP leverages two key insights: (1) neural network layers contribute unevenly to model utility, and (2) layer-wise privacy leakage can be quantified via KL divergence between local and global model distributions. LaDP dynamically injects noise into selected layers based on their privacy sensitivity and importance to model performance. We provide a rigorous theoretical analysis, proving that LaDP satisfies $(\epsilon, \delta)$-DP guarantees and converges under bounded noise. Extensive experiments on CIFAR-10/100 datasets demonstrate that LaDP reduces noise injection by 46.14% on average compared to state-of-the-art (SOTA) methods while improving accuracy by 102.99%. Under the same privacy budget, LaDP outperforms SOTA solutions like Dynamic Privacy Allocation LDP and AdapLDP by 25.18% and 6.1% in accuracy, respectively. Additionally, LaDP robustly defends against reconstruction attacks, increasing the FID of the reconstructed private data by $>$12.84% compared to all baselines. Our work advances the practical deployment of privacy-preserving FL with minimal utility loss.
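The abstract's two key insights (uneven layer utility, and KL divergence between local and global model distributions as a leakage proxy) can be illustrated with a minimal sketch. The sketch below approximates each layer's parameters as a univariate Gaussian and scales per-layer Gaussian noise by the resulting divergence score; all function names, the Gaussian approximation, and the score-to-noise mapping are illustrative assumptions, not the paper's actual LaDP algorithm.

```python
import numpy as np

def kl_gaussian(mu_p, var_p, mu_q, var_q):
    # KL( N(mu_p, var_p) || N(mu_q, var_q) ) for univariate Gaussians.
    return (np.log(np.sqrt(var_q / var_p))
            + (var_p + (mu_p - mu_q) ** 2) / (2.0 * var_q)
            - 0.5)

def layerwise_noise(local_layers, global_layers, base_sigma=0.1, clip=1.0):
    """Sketch of layer-wise adaptive noise injection (hypothetical mapping).

    Each layer's parameters are summarized as a Gaussian; layers whose local
    distribution diverges more from the global one (higher KL, i.e. a rough
    leakage proxy) receive proportionally more Gaussian noise after L2 clipping.
    """
    eps = 1e-8
    # Approximate per-layer leakage score via Gaussian KL divergence.
    scores = [
        max(kl_gaussian(loc.mean(), loc.var() + eps,
                        glob.mean(), glob.var() + eps), 0.0)
        for loc, glob in zip(local_layers, global_layers)
    ]
    total = sum(scores) + eps
    noisy = []
    for loc, s in zip(local_layers, scores):
        # Allocate the noise budget in proportion to the layer's score.
        sigma = base_sigma * (s / total) * len(scores)
        # Standard DP-style L2 clipping before adding noise.
        clipped = loc * min(1.0, clip / (np.linalg.norm(loc) + eps))
        noisy.append(clipped + np.random.normal(0.0, sigma * clip, size=loc.shape))
    return noisy
```

In this toy allocation, a layer matching the global distribution gets near-zero noise while a divergent layer absorbs most of the budget; the paper additionally weights layers by their utility contribution and proves the combined mechanism satisfies $(\epsilon, \delta)$-DP.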
Problem

Research questions and friction points this paper is trying to address.

Federated Learning · Differential Privacy · Privacy-Utility Tradeoff · Layer-wise Privacy · Model Utility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise Differential Privacy · Adaptive Noise Injection · Federated Learning · KL Divergence · Privacy-Utility Tradeoff
Yunbo Li
Shanghai Jiao Tong University, Shanghai, 200240, China
Jiaping Gui
Assistant Professor, Shanghai Jiao Tong University
Network and System Security · Artificial Intelligence · Software Engineering
Fanchao Meng
Shanghai Jiao Tong University, Shanghai, 200240, China
Yue Wu
Shanghai Jiao Tong University, Shanghai, 200240, China