Robust Federated Learning Against Poisoning Attacks: A GAN-Based Defense Framework

📅 2025-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning is vulnerable to poisoning attacks, and existing defenses rely on external datasets or assume a fixed proportion of malicious clients, limiting generalizability and scalability. To address this, we propose the first server-side defense framework that requires no external data and makes no assumptions about the fraction of malicious clients. Our approach leverages a conditional generative adversarial network (cGAN) to dynamically synthesize task-relevant, discriminative data; integrates update consistency checking; and incorporates a differential privacy enhancement module to verify the authenticity of client model updates in real time before aggregation. The framework is fully adaptive and integrates end to end into existing FL pipelines. Evaluated on CIFAR-10 and MNIST, it achieves a true positive rate above 96% and a true negative rate above 94%, with less than 1.2% accuracy degradation, outperforming state-of-the-art methods in robustness and practicality.
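The core server-side loop described above, scoring each client update on synthetic data and excluding low-scoring updates from aggregation, can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: it uses a plain linear classifier, random data standing in for cGAN samples, a fixed accuracy threshold, and hypothetical function names (`score_update`, `filter_and_aggregate`).

```python
import numpy as np

def score_update(weights, x_syn, y_syn):
    """Accuracy of a client's linear model on server-side synthetic data."""
    logits = x_syn @ weights          # (n_samples, n_classes)
    preds = logits.argmax(axis=1)
    return (preds == y_syn).mean()

def filter_and_aggregate(client_weights, x_syn, y_syn, threshold=0.5):
    """Drop updates scoring below threshold, then average the rest (FedAvg-style)."""
    scores = [score_update(w, x_syn, y_syn) for w in client_weights]
    accepted = [w for w, s in zip(client_weights, scores) if s >= threshold]
    if not accepted:                  # caller would keep the previous global model
        return None, scores
    return np.mean(accepted, axis=0), scores

# Toy demo: 2-class linear task with one poisoned (sign-flipped) update.
rng = np.random.default_rng(0)
w_true = np.array([[2.0, -2.0], [-2.0, 2.0]])       # (features, classes)
x_syn = rng.normal(size=(200, 2))
y_syn = (x_syn @ w_true).argmax(axis=1)             # stand-in for cGAN samples

benign = [w_true + rng.normal(scale=0.1, size=w_true.shape) for _ in range(3)]
poisoned = -w_true                                  # label-flipping-style update
global_w, scores = filter_and_aggregate(benign + [poisoned], x_syn, y_syn)
```

In this sketch the poisoned update scores near zero accuracy on the synthetic data and is excluded, while the three benign updates are averaged into the new global model; the paper's framework additionally applies consistency checking and differential privacy around this step.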

📝 Abstract
Federated Learning (FL) enables collaborative model training across decentralized devices without sharing raw data, but it remains vulnerable to poisoning attacks that compromise model integrity. Existing defenses often rely on external datasets or predefined heuristics (e.g., the number of malicious clients), limiting their effectiveness and scalability. To address these limitations, we propose a privacy-preserving defense framework that leverages a Conditional Generative Adversarial Network (cGAN) to generate synthetic data at the server for authenticating client updates, eliminating the need for external datasets. Our framework is scalable, adaptive, and integrates seamlessly into FL workflows. Extensive experiments on benchmark datasets demonstrate robust performance against a variety of poisoning attacks, achieving a high True Positive Rate (TPR) on malicious clients and a high True Negative Rate (TNR) on benign clients while maintaining model accuracy. The proposed framework offers a practical and effective solution for securing federated learning systems.
Problem

Research questions and friction points this paper is trying to address.

Defends FL against poisoning attacks without external data
Uses cGAN for scalable, privacy-preserving client authentication
Maintains model accuracy while detecting malicious clients effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses cGAN for synthetic data generation
Authenticates client updates without external data
Scalable and adaptive FL defense framework
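The conditional generation idea behind the first bullet, feeding a class label together with the noise vector so the server can request samples for a specific label, can be illustrated with a toy, untrained single-layer generator. All dimensions, names, and weights here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
NOISE_DIM, N_CLASSES, OUT_DIM = 8, 10, 784   # hypothetical sizes (e.g. MNIST-like)

# Untrained single-layer generator, shown only for the conditioning interface:
# the class label is one-hot encoded and concatenated with the noise vector.
W = rng.normal(scale=0.1, size=(NOISE_DIM + N_CLASSES, OUT_DIM))

def generate(label, n_samples):
    """Draw n_samples conditioned on a class label (toy forward pass)."""
    z = rng.normal(size=(n_samples, NOISE_DIM))
    onehot = np.zeros((n_samples, N_CLASSES))
    onehot[:, label] = 1.0
    return np.tanh(np.concatenate([z, onehot], axis=1) @ W)

batch = generate(label=3, n_samples=16)
```

A trained cGAN would learn `W` (and deeper layers) adversarially against a discriminator; the point of the sketch is that label conditioning is what makes the synthetic data task-relevant and discriminative for authenticating updates.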