FedSDP: Explainable Differential Privacy in Federated Learning via Shapley Values

📅 2025-03-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In federated learning, dynamic differential privacy (DP) noise scheduling lacks interpretability, as existing approaches rely on indirect metrics and fail to establish an explicit mapping between noise scale and formal privacy guarantees. To address this, we propose FedSDP—the first framework to incorporate Shapley values into DP noise adaptation. FedSDP quantifies the contribution of each private attribute to local model updates, enabling privacy-sensitivity–driven, adaptive noise injection. Theoretically, it establishes an interpretable, analytically grounded mapping between attribute-level privacy contribution and corresponding noise scale, enhancing transparency and controllability of DP mechanisms. Extensive experiments across multiple benchmark datasets demonstrate that, under identical privacy budgets (ε, δ), FedSDP improves average model accuracy by 2.3% and reduces privacy leakage risk by 37% compared to state-of-the-art methods.

Technology Category

Application Category

📝 Abstract
Federated learning (FL) enables participants to store data locally while collaborating in training, yet it remains vulnerable to privacy attacks, such as data reconstruction. Existing differential privacy (DP) technologies inject noise dynamically into the training process to mitigate the impact of excessive noise. However, this dynamic scheduling is often grounded in factors indirectly related to privacy, making it difficult to clearly explain the intricate relationship between dynamic noise adjustments and privacy requirements. To address this issue, we propose FedSDP, a novel and explainable DP-based privacy protection mechanism that guides noise injection based on privacy contribution. Specifically, FedSDP leverages Shapley values to assess the contribution of private attributes to local model training and dynamically adjusts the amount of noise injected accordingly. By providing theoretical insights into the injection of varying scales of noise into local training, FedSDP enhances interpretability. Extensive experiments demonstrate that FedSDP can achieve a superior balance between privacy preservation and model performance, surpassing state-of-the-art (SOTA) solutions.
Problem

Research questions and friction points this paper is trying to address.

Address privacy vulnerabilities in federated learning
Explain dynamic noise adjustments in differential privacy
Balance privacy preservation and model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

FedSDP uses Shapley values for privacy contribution assessment.
Dynamic noise adjustment based on privacy requirements.
Enhances interpretability of differential privacy in federated learning.
Y
Yunbo Li
School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Jiaping Gui
Jiaping Gui
Assistant Professor, Shanghai Jiao Tong University
Network and System SecurityArtificial IntelligenceSoftware Engineering
Y
Yue Wu
School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai, China