Dual Defense: Enhancing Privacy and Mitigating Poisoning Attacks in Federated Learning

📅 2025-02-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Federated learning (FL) faces dual threats from privacy leakage and poisoning attacks; existing defenses typically address these issues in isolation, often relying on non-colluding dual-server architectures or disrupting the native FL topology, thereby compromising scalability. This paper proposes DDFed, a unified defense framework that requires no additional trusted roles and preserves the original FL topology. Its core is a two-stage anomaly detection mechanism tailored for encrypted model updates: it integrates secure similarity measurement under fully homomorphic encryption (FHE) with feedback-driven collaborative filtering, while inherently safeguarding Byzantine clients' privacy. Evaluated across cross-device and cross-silo FL settings, DDFed achieves over 96% detection accuracy against diverse poisoning attacks, ensures end-to-end semantic-level privacy, and significantly enhances both robustness and practicality.

๐Ÿ“ Abstract
Federated learning (FL) is inherently susceptible to privacy breaches and poisoning attacks. To tackle these challenges, researchers have separately devised secure aggregation mechanisms to protect data privacy and robust aggregation methods that withstand poisoning attacks. However, simultaneously addressing both concerns is challenging: secure aggregation facilitates poisoning attacks, as most anomaly detection techniques require access to unencrypted local model updates, which are obscured by secure aggregation. The few recent efforts that tackle both challenges simultaneously often depend on the impractical assumption of non-colluding two-server setups that disrupt FL's topology, or on three-party computation, which introduces scalability issues, complicating deployment and application. To overcome this dilemma, this paper introduces a Dual Defense Federated learning (DDFed) framework. DDFed simultaneously boosts privacy protection and mitigates poisoning attacks, without introducing new participant roles or disrupting the existing FL topology. DDFed first leverages cutting-edge fully homomorphic encryption (FHE) to securely aggregate model updates, ensuring strong privacy protection without the impractical requirement of non-colluding two-server setups. Additionally, we propose a unique two-phase anomaly detection mechanism for encrypted model updates, featuring secure similarity computation and feedback-driven collaborative selection, with additional measures incorporated into the detection process to prevent potential privacy breaches from Byzantine clients. We conducted extensive experiments on various model poisoning attacks and FL scenarios, including both cross-device and cross-silo FL. Experiments on publicly available datasets demonstrate that DDFed successfully protects model privacy and effectively defends against model poisoning threats.
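The abstract's key enabler is that the aggregator can sum encrypted model updates without decrypting them. The sketch below illustrates this homomorphic-addition property with a toy Paillier cryptosystem; this is an illustrative stand-in, not the paper's scheme (DDFed uses fully homomorphic encryption, which additionally supports the multiplications needed for secure similarity computation), and the tiny demo primes are far too small for real security.

```python
import random

# Toy Paillier cryptosystem: additively homomorphic, so an untrusted
# aggregator can sum ciphertexts of (quantized) model updates without
# ever seeing the plaintexts. Demo primes only; real deployments use
# keys of ~2048 bits or more.
P, Q = 1000003, 1000033
N = P * Q
N2 = N * N
LAM = (P - 1) * (Q - 1)

def encrypt(m):
    # c = (1 + N)^m * r^N mod N^2, with random blinding factor r
    r = random.randrange(2, N)
    return (pow(1 + N, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    # c^LAM mod N^2 = 1 + m*LAM*N, so recover m*LAM and divide out LAM
    x = pow(c, LAM, N2)
    return ((x - 1) // N) * pow(LAM, -1, N) % N

def he_add(c1, c2):
    # multiplying ciphertexts adds the underlying plaintexts
    return (c1 * c2) % N2

# The server aggregates two clients' values without decrypting either:
c = he_add(encrypt(37), encrypt(5))
assert decrypt(c) == 42
```

Because addition is performed entirely on ciphertexts, only the holder of the decryption key ever sees the aggregated result, which is why anomaly detection on such updates requires the paper's dedicated encrypted-domain mechanism.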
Problem

Research questions and friction points this paper is trying to address.

Enhance privacy in federated learning
Mitigate poisoning attacks in federated learning
Simultaneously address privacy and poisoning issues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual Defense Federated Learning framework
Fully homomorphic encryption for privacy
Two-phase anomaly detection mechanism for encrypted updates
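The two-phase detection idea can be sketched in plaintext: score each client's update by its similarity to the others, then keep only the well-scoring updates before averaging. The function name, threshold rule, and cosine metric below are illustrative assumptions; in DDFed the similarity computation runs under FHE and selection is feedback-driven, so the server never inspects plaintext updates.

```python
import numpy as np

def filter_and_aggregate(updates, threshold=0.0):
    """Plaintext sketch of two-phase, similarity-based filtering.

    Phase 1: score each client update by its mean cosine similarity
    to the other updates (DDFed performs this step on ciphertexts).
    Phase 2: keep only updates whose score clears the threshold (a
    simple stand-in for feedback-driven collaborative selection),
    then average the survivors.
    """
    U = np.stack(updates).astype(float)           # (n_clients, dim)
    V = U / np.linalg.norm(U, axis=1, keepdims=True)
    sims = V @ V.T                                # pairwise cosine similarity
    n = len(updates)
    scores = (sims.sum(axis=1) - 1.0) / (n - 1)   # mean similarity to the others
    keep = scores >= threshold
    return U[keep].mean(axis=0), keep

# Three well-behaved clients and one sign-flipping attacker:
honest = [np.ones(4), 1.01 * np.ones(4), 0.99 * np.ones(4)]
poisoned = [-5.0 * np.ones(4)]
agg, keep = filter_and_aggregate(honest + poisoned)
assert keep.tolist() == [True, True, True, False]
```

The attacker's inverted update has negative similarity to every honest update, so it scores below the threshold and is excluded from the aggregate.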