The Robustness of Spiking Neural Networks in Federated Learning with Compression Against Non-omniscient Byzantine Attacks

📅 2025-01-06
📈 Citations: 0
✹ Influential: 0
📄 PDF
đŸ€– AI Summary
Federated learning (FL) in IoT faces the dual challenges of non-omniscient Byzantine attacks and severe bandwidth constraints. Method: This paper pioneers a systematic investigation into the synergistic advantages of spiking neural networks (SNNs) over artificial neural networks (ANNs) in terms of both adversarial robustness and communication efficiency. We propose a novel paradigm integrating Top-Îș gradient sparsification with the intrinsically sparse spike-based activations of SNNs, jointly addressing model-update compression and malicious-client filtering. Contribution/Results: Under non-omniscient Byzantine attacks (e.g., MinMax), FL-SNN achieves roughly 40% higher accuracy than baseline FL methods while significantly reducing communication overhead, accelerating convergence, and enhancing robustness. This work establishes a verifiable framework and new design principles for secure, efficient edge intelligence in resource-constrained, highly adversarial IoT environments.
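The Top-Îș sparsification mentioned above is, at its core, magnitude-based compression of each client's model update: only the Îș largest-magnitude coordinates are uploaded. The sketch below is an illustrative reconstruction rather than the authors' code; the function names, the PyTorch dependency, and the default sparsity ratio are assumptions.

```python
import math
import torch

def top_k_sparsify(update: torch.Tensor, k_ratio: float = 0.01):
    """Keep only the k_ratio fraction of largest-magnitude entries of a
    client's model update; only (indices, values) need to be uploaded."""
    flat = update.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    # Indices of the k entries with the largest absolute value.
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx]

def densify(idx: torch.Tensor, values: torch.Tensor, shape) -> torch.Tensor:
    """Server-side reconstruction of the sparse update as a dense tensor."""
    flat = torch.zeros(math.prod(shape), dtype=values.dtype)
    flat[idx] = values
    return flat.reshape(shape)
```

For example, `densify(*top_k_sparsify(update), update.shape)` recovers a dense tensor in which (for k_ratio = 0.01) about 99% of coordinates are zero, which is where the bandwidth saving comes from; the SNN's sparse spike-based activations are what keep the surviving coordinates informative.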

📝 Abstract
Spiking Neural Networks (SNNs), which offer exceptional energy efficiency for inference, and Federated Learning (FL), which offers privacy-preserving distributed training, form a rising area of interest that is highly beneficial to Internet of Things (IoT) devices. Despite this, research that tackles Byzantine attacks and bandwidth limitations in FL-SNNs, both of which pose significant threats to model convergence and training times, remains largely unexplored. Going beyond proposing a solution to both of these problems, in this work we highlight the dual benefits of FL-SNNs over FL-ANNs: robustness against non-omniscient Byzantine adversaries (ones whose access to local clients' datasets is restricted) and greater communication efficiency. Specifically, we discover that a simple integration of Top-Îș sparsification into the FL apparatus can leverage the advantages of SNN models, both greatly reducing bandwidth usage and significantly boosting the robustness of FL training against non-omniscient Byzantine adversaries. Most notably, we observe a massive improvement of roughly 40% accuracy gain in FL-SNN training under the lethal MinMax attack.
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks (SNNs)
Federated Learning (FL)
Byzantine Attacks and Bandwidth Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spiking Neural Networks (SNNs)
Top-Îș Sparsification
Byzantine Robustness
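The excerpt does not spell out which filtering rule the authors apply to the sparsified updates, so the following is only a generic stand-in: a coordinate-wise median over the densified client updates, one of the standard Byzantine-robust aggregators. The function name and the use of PyTorch are assumptions.

```python
import torch

def robust_aggregate(client_updates: list[torch.Tensor]) -> torch.Tensor:
    """Coordinate-wise median over already-densified client updates.

    With fewer than half of the clients Byzantine, each coordinate of the
    median lies within the range of benign values, which limits what a
    non-omniscient attacker (no access to benign clients' data) can inject.
    """
    stacked = torch.stack(client_updates)  # shape: (num_clients, *param_shape)
    return stacked.median(dim=0).values
```

Whatever the exact rule, the reported intuition is that sparse SNN updates leave an attacker with fewer high-impact coordinates to poison, which is what the ~40% accuracy gain under MinMax reflects.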