Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity

📅 2024-09-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the insufficient resilience of federated learning (FL) against poisoning attacks in cybersecurity applications. To this end, we develop the first reproducible and scalable FL testbed tailored for edge devices, integrating Raspberry Pi and NVIDIA Jetson hardware, to support distributed intrusion detection system development and adversarial evaluation. Methodologically, we implement gradient- and label-level poisoning attacks within the Flower framework and systematically quantify model robustness under diverse attack scenarios. Our contributions are threefold: (1) the first empirical validation of FL's practicality on resource-constrained edge devices while preserving data privacy; (2) a comprehensive characterization of FL's high sensitivity to multiple poisoning strategies; and (3) the proposal of lightweight, deployable mitigation mechanisms. Collectively, these results establish an evidence-based foundation and concrete technical pathways toward secure and trustworthy edge-based federated learning.
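The label-level poisoning mentioned above can be sketched in a few lines. This is an illustrative example of label flipping by a malicious client, assuming a binary benign/attack labeling; the function name, flip rate, and seed are assumptions, not the paper's implementation:

```python
import random

def flip_labels(labels, source=0, target=1, rate=1.0, seed=42):
    """Label-flipping poisoning: a malicious client relabels a fraction
    `rate` of its `source`-class samples as `target` before local training,
    corrupting the updates it contributes to the global model."""
    rng = random.Random(seed)
    return [target if y == source and rng.random() < rate else y
            for y in labels]

# With rate=1.0 every benign (0) sample is relabeled as attack (1):
poisoned = flip_labels([0, 1, 0, 0, 1], source=0, target=1, rate=1.0)
print(poisoned)  # → [1, 1, 1, 1, 1]
```

Gradient-level poisoning works analogously one step later in the pipeline: instead of corrupting labels before training, the client perturbs or scales the model update it sends back to the server.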

📝 Abstract
This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application in cybersecurity and evaluating its resilience against poisoning attacks. Federated Learning allows multiple clients to collaboratively train a global model while keeping their data decentralized, addressing critical needs for data privacy and security, particularly in sensitive fields like cybersecurity. Our testbed, built on Raspberry Pi and NVIDIA Jetson hardware running the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration. Through a case study on federated intrusion detection systems, we demonstrate the testbed's ability to detect anomalies and secure critical infrastructure without exposing sensitive network data. Comprehensive poisoning tests, targeting both model and data integrity, evaluate the system's robustness under adversarial conditions. The results show that while federated learning enhances data privacy and distributed learning, it remains vulnerable to poisoning attacks, which must be mitigated to ensure its reliability in real-world applications.
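One common family of mitigations for the vulnerability noted above is robust aggregation: the server replaces the plain mean of client updates with a coordinate-wise median, so a minority of poisoned updates cannot pull the global model arbitrarily far. The sketch below illustrates that general idea and is not the specific mitigation mechanism proposed in the paper:

```python
from statistics import median

def median_aggregate(client_updates):
    """Coordinate-wise median over client weight vectors: robust to a
    minority of outlier (poisoned) updates, unlike plain averaging."""
    return [median(coords) for coords in zip(*client_updates)]

# Two honest clients plus one poisoned client sending extreme weights:
updates = [[0.9, 1.1], [1.0, 1.0], [100.0, -100.0]]
print(median_aggregate(updates))  # → [1.0, 1.0]
```

With plain averaging the same poisoned client would shift the aggregate to roughly [34.0, -32.6]; the median ignores the outlier entirely, at the cost of discarding some information from honest clients.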
Problem

Research questions and friction points this paper is trying to address.

Evaluating FL testbed resilience against poisoning attacks in cybersecurity
Assessing FL frameworks for performance, scalability, and integration ease
Analyzing FL vulnerabilities to poisoning attacks despite privacy benefits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning testbed for cybersecurity resilience
Raspberry Pi and Nvidia Jetson hardware implementation
Poisoning attack robustness evaluation in FL
Hao Jian Huang
Department of Computer Science, University at Albany, SUNY
Bekzod Iskandarov
Department of Computer Science, University at Albany, SUNY
Mizanur Rahman
Department of Information Science and Technology, University at Albany, SUNY
Hakan T. Otal
Department of Information Science and Technology, University at Albany, SUNY
M. Abdullah Canbaz
Department of Information Science and Technology, University at Albany, SUNY