On Evaluating the Poisoning Robustness of Federated Learning under Local Differential Privacy

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited robustness of federated learning under local differential privacy (LDPFL) against model poisoning attacks (MPAs). We propose a scalable, adaptive poisoning framework that introduces a novel reverse-training mechanism tailored to LDPFL: it embeds adaptive gradient perturbations while strictly satisfying LDP constraints, thereby evading mainstream robust aggregation defenses, including Multi-Krum and trimmed mean. Extensive experiments across diverse neural architectures, benchmark datasets, and LDPFL protocols demonstrate that the attack significantly increases global training loss and degrades model accuracy, exposing a fundamental vulnerability of existing LDPFL systems to adaptive threats. The study provides critical adversarial insights and an empirical evaluation benchmark for co-designing privacy-preserving, security-resilient federated learning mechanisms.

📝 Abstract
Federated learning (FL) combined with local differential privacy (LDP) enables privacy-preserving model training across decentralized data sources. However, the decentralized data-management paradigm leaves LDPFL vulnerable to participants with malicious intent. The robustness of LDPFL protocols, particularly against model poisoning attacks (MPAs), where adversaries inject malicious updates to disrupt global model convergence, remains insufficiently studied. In this paper, we propose a novel and extensible model poisoning attack framework tailored for LDPFL settings. Our approach is driven by the objective of maximizing the global training loss while adhering to local privacy constraints. To counter robust aggregation mechanisms such as Multi-Krum and trimmed mean, we develop adaptive attacks that embed carefully crafted constraints into a reverse training process, enabling evasion of these defenses. We evaluate our framework across three representative LDPFL protocols, three benchmark datasets, and two types of deep neural networks. Additionally, we investigate the influence of data heterogeneity and privacy budgets on attack effectiveness. Experimental results demonstrate that our adaptive attacks can significantly degrade the performance of the global model, revealing critical vulnerabilities and highlighting the need for more robust LDPFL defense strategies against MPAs. Our code is available at https://github.com/ZiJW/LDPFL-Attack.
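For context on the defenses the attack must evade: trimmed mean is a coordinate-wise robust aggregator that, for each model coordinate, discards the k largest and k smallest client values before averaging. The sketch below is a generic textbook version in plain Python, not the paper's implementation; the function and parameter names are illustrative.

```python
def trimmed_mean(updates, trim_k):
    """Coordinate-wise trimmed mean over a list of client updates.

    updates: list of equal-length lists of floats (one per client).
    trim_k:  number of extreme values dropped from each end, per coordinate.
    """
    n = len(updates)
    assert n > 2 * trim_k, "need more clients than values being trimmed"
    dim = len(updates[0])
    aggregated = []
    for j in range(dim):
        # Sort this coordinate across clients, drop trim_k from each tail.
        vals = sorted(u[j] for u in updates)
        kept = vals[trim_k : n - trim_k]
        aggregated.append(sum(kept) / len(kept))
    return aggregated
```

With trim_k = 1, a single wildly outlying client value on any coordinate is discarded before averaging, which is why a naive large-magnitude poisoning update fails and an adaptive attack must instead stay inside the trimmed range.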
Problem

Research questions and friction points this paper is trying to address.

Evaluating poisoning robustness in federated learning under local differential privacy
Assessing vulnerability to model poisoning attacks in decentralized privacy-preserving training
Investigating adaptive attacks that bypass robust aggregation defenses in LDPFL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive attacks embedding constraints in reverse training
Maximizing global loss under local privacy constraints
Evading robust aggregation mechanisms like Multi-Krum
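The combination of these ideas can be illustrated with a minimal sketch: a malicious client flips the sign of its honest gradient (gradient ascent on the global loss instead of descent), then clips the result to the same norm bound an honest LDP client would satisfy so the update remains in the protocol's valid range. This is an assumption-laden toy, not the paper's reverse-training algorithm; the function name, the L2 clipping, and the Gaussian perturbation (a stand-in for whatever LDP randomizer a given protocol uses) are all illustrative.

```python
import math
import random

def malicious_update(honest_grad, clip_bound, noise_scale, rng=None):
    """Hypothetical loss-maximizing update under an LDP-style norm bound.

    honest_grad: the gradient an honest client would have submitted.
    clip_bound:  L2 bound the protocol enforces on client updates.
    noise_scale: std-dev of the protocol-mandated perturbation (stand-in).
    """
    rng = rng or random.Random(0)
    # Sign-flip so the aggregated step *increases* the global loss.
    flipped = [-g for g in honest_grad]
    # Clip to the honest clients' L2 bound so the poisoned update
    # stays inside the range robust aggregators consider plausible.
    norm = math.sqrt(sum(g * g for g in flipped))
    if norm > clip_bound:
        flipped = [g * clip_bound / norm for g in flipped]
    # Apply the (placeholder) LDP perturbation.
    return [g + rng.gauss(0.0, noise_scale) for g in flipped]
```

Evading Multi-Krum additionally requires the malicious update to sit close to the honest clients' updates in Euclidean distance, which is where the paper's adaptive, constraint-embedding reverse training goes beyond this simple sign-flip sketch.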