Privacy in Federated Learning with Spiking Neural Networks

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work presents the first systematic investigation into the robustness of spiking neural networks (SNNs) against gradient inversion attacks in federated learning. Addressing the privacy threat posed by sensitive data leakage via gradient reversal, we analyze how SNNs’ event-driven dynamics—combined with surrogate gradient training—naturally obfuscate gradient information, hindering meaningful input reconstruction. We adapt multiple gradient leakage attack paradigms to the spiking domain and conduct empirical evaluations across diverse datasets. Results demonstrate that reconstructed images from SNN gradients exhibit substantially higher noise levels and poorer spatiotemporal coherence compared to those from artificial neural networks (ANNs), markedly reducing data exposure risk. Our study reveals an intrinsic privacy-preserving property of SNNs, offering both theoretical insight and empirical evidence for brain-inspired, privacy-aware federated learning frameworks.

📝 Abstract
Spiking neural networks (SNNs) have emerged as prominent candidates for embedded and edge AI. Their inherent low power consumption makes them far more efficient than conventional ANNs in scenarios where energy budgets are tightly constrained. In parallel, federated learning (FL) has become the prevailing training paradigm in such settings, enabling on-device learning while limiting the exposure of raw data. However, gradient inversion attacks represent a critical privacy threat in FL, where sensitive training data can be reconstructed directly from shared gradients. While this vulnerability has been widely investigated in conventional ANNs, its implications for SNNs remain largely unexplored. In this work, we present the first comprehensive empirical study of gradient leakage in SNNs across diverse data domains. SNNs are inherently non-differentiable and are typically trained using surrogate gradients, which we hypothesized would be less correlated with the original input and thus less informative from a privacy perspective. To investigate this, we adapt different gradient leakage attacks to the spike domain. Our experiments reveal a striking contrast with conventional ANNs: whereas ANN gradients reliably expose salient input content, SNN gradients yield noisy, temporally inconsistent reconstructions that fail to recover meaningful spatial or temporal structure. These results indicate that the combination of event-driven dynamics and surrogate-gradient training substantially reduces gradient informativeness. To the best of our knowledge, this work provides the first systematic benchmark of gradient inversion attacks for spiking architectures, highlighting the inherent privacy-preserving potential of neuromorphic computation.
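To make the threat model concrete: in the simplest setting, an ANN's shared gradients can reveal the training input analytically. The sketch below (not the paper's attack setup; a minimal single-layer illustration using NumPy) shows why: for a linear classifier, the weight gradient is an outer product of the output error and the input, so dividing one gradient row by the matching bias gradient recovers the input exactly.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gradients(W, b, x, label):
    """Cross-entropy gradients for a single-layer linear classifier."""
    p = softmax(W @ x + b)
    delta = p.copy()
    delta[label] -= 1.0                # dL/dz (output error)
    return np.outer(delta, x), delta   # dL/dW = delta * x^T, dL/db = delta

def invert_gradients(grad_W, grad_b):
    """Analytic reconstruction: each row grad_W[i] equals grad_b[i] * x."""
    i = np.argmax(np.abs(grad_b))      # pick a row with nonzero error
    return grad_W[i] / grad_b[i]

rng = np.random.default_rng(0)
x_true = rng.random(8)                 # the "private" training input
W, b = rng.normal(size=(3, 8)), np.zeros(3)
gW, gb = gradients(W, b, x_true, label=1)
x_rec = invert_gradients(gW, gb)
print(np.allclose(x_rec, x_true))      # the input leaks from gradients alone
```

Practical attacks on deep ANNs (e.g. DLG-style gradient matching) generalize this by optimizing a dummy input until its gradients match the shared ones; the paper's claim is that SNN surrogate gradients break the tight gradient-input correlation this relies on.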
Problem

Research questions and friction points this paper is trying to address.

Investigates gradient inversion attack vulnerability in federated learning with spiking neural networks.
Evaluates privacy risks by adapting gradient leakage attacks to spike-based training paradigms.
Benchmarks gradient informativeness to assess inherent privacy advantages of neuromorphic computation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

SNN surrogate gradients reduce gradient leakage risk
Event-driven dynamics lower gradient informativeness for privacy
Spiking architectures inherently resist gradient inversion attacks
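The surrogate-gradient mechanism behind these claims can be sketched briefly. The spike function is a Heaviside step with zero derivative almost everywhere, so training substitutes a smooth surrogate in the backward pass. The snippet below illustrates one common choice (a fast-sigmoid surrogate, as in SuperSpike; parameter values are illustrative, not the paper's): the backward signal is a smoothed stand-in rather than the true derivative, which is why SNN gradients carry less faithful input information.

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=5.0):
    """Backward pass: fast-sigmoid surrogate for d(spike)/dv.
    Peaks at the threshold and decays away from it."""
    return beta / (beta * np.abs(v - threshold) + 1.0) ** 2

v = np.linspace(-1.0, 3.0, 5)   # membrane potentials
print(spike(v))                  # binary spikes, exact gradient is 0 a.e.
print(surrogate_grad(v))         # smooth approximation used during backprop
```

Because every backward pass flows through this approximation (and through discrete, event-driven spike trains over time), the gradients a client shares are systematically decorrelated from the raw input, which is the intrinsic privacy property the paper benchmarks.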
Dogukan Aksu
Centre for Secure Information Technologies (CSIT), Queen's University Belfast, UK
Jesus Martinez del Rincon
Centre for Secure Information Technologies (CSIT), Queen's University Belfast, UK
Ihsen Alouani
Centre for Secure Information Technologies (CSIT), Queen's University Belfast, UK
ML/AI Security & Privacy · Systems Security