🤖 AI Summary
This work presents the first systematic investigation into the robustness of spiking neural networks (SNNs) against gradient inversion attacks in federated learning. Addressing the privacy threat posed by sensitive data leakage via gradient inversion, we analyze how SNNs’ event-driven dynamics—combined with surrogate gradient training—naturally obfuscate gradient information, hindering meaningful input reconstruction. We adapt multiple gradient leakage attack paradigms to the spiking domain and conduct empirical evaluations across diverse datasets. Results demonstrate that reconstructed images from SNN gradients exhibit substantially higher noise levels and poorer spatiotemporal coherence compared to those from artificial neural networks (ANNs), markedly reducing data exposure risk. Our study reveals an intrinsic privacy-preserving property of SNNs, offering both theoretical insight and empirical evidence for brain-inspired, privacy-aware federated learning frameworks.
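To see why shared gradients can leak training data at all, consider the simplest case: a linear layer evaluated on a single sample. Its weight gradient is a rank-one outer product of the output error and the input, so the input can be read off directly. The sketch below is illustrative only (the variable names and setup are not from the paper) and shows the analytic recovery that iterative attacks like deep leakage from gradients generalize:

```python
import numpy as np

# For a linear layer y = W x + b on a single sample, the loss gradients are
# dL/dW = g x^T and dL/db = g, where g = dL/dy. Any row of dL/dW whose bias
# gradient is nonzero therefore reveals the private input x exactly.

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private input held by a federated client
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
target = rng.normal(size=3)

y = W @ x + b
g = y - target                    # grad of 0.5 * ||y - target||^2 w.r.t. y
grad_W = np.outer(g, x)           # gradients the client would share
grad_b = g

# Attacker-side reconstruction from the shared gradients alone:
row = np.argmax(np.abs(grad_b))   # pick a row with a nonzero bias gradient
x_rec = grad_W[row] / grad_b[row]

print(np.allclose(x_rec, x))      # → True: the input is recovered exactly
```

Deeper ANN attacks replace this closed form with an optimization that matches dummy gradients to the shared ones; the paper's claim is that SNN gradients make that matching far less informative.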
📝 Abstract
Spiking neural networks (SNNs) have emerged as prominent candidates for embedded and edge AI. Their inherent low power consumption makes them far more efficient than conventional ANNs in scenarios where energy budgets are tightly constrained. In parallel, federated learning (FL) has become the prevailing training paradigm in such settings, enabling on-device learning while limiting the exposure of raw data. However, gradient inversion attacks represent a critical privacy threat in FL, where sensitive training data can be reconstructed directly from shared gradients. While this vulnerability has been widely investigated in conventional ANNs, its implications for SNNs remain largely unexplored. In this work, we present the first comprehensive empirical study of gradient leakage in SNNs across diverse data domains. The spiking activation in SNNs is non-differentiable, so they are typically trained using surrogate gradients, which we hypothesized would be less correlated with the original input and thus less informative from a privacy perspective. To investigate this, we adapt several gradient leakage attacks to the spike domain. Our experiments reveal a striking contrast with conventional ANNs: whereas ANN gradients reliably expose salient input content, SNN gradients yield noisy, temporally inconsistent reconstructions that fail to recover meaningful spatial or temporal structure. These results indicate that the combination of event-driven dynamics and surrogate-gradient training substantially reduces gradient informativeness. To the best of our knowledge, this work provides the first systematic benchmark of gradient inversion attacks for spiking architectures, highlighting the inherent privacy-preserving potential of neuromorphic computation.
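The surrogate-gradient hypothesis above can be made concrete with a minimal sketch. The forward pass of a spiking neuron applies a non-differentiable Heaviside step, while the backward pass substitutes a smooth surrogate; the version below assumes the common fast-sigmoid surrogate, and the function names are illustrative rather than taken from any particular framework:

```python
import numpy as np

def spike(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside spike on membrane potential v."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, beta=10.0):
    """Backward-pass stand-in: derivative of a fast sigmoid,
    1 / (1 + beta * |v - threshold|)^2, used in place of the true
    derivative of the step, which is zero almost everywhere."""
    return 1.0 / (1.0 + beta * np.abs(v - threshold)) ** 2

v = np.array([0.2, 0.9, 1.0, 1.5])
print(spike(v))            # → [0. 0. 1. 1.]
print(surrogate_grad(v))   # smooth and nonzero even where the true grad is 0
```

Because the gradients a federated SNN client shares are built from these surrogate derivatives (and accumulated over discrete time steps), they are a smoothed proxy rather than the true sensitivity of the loss to the input, which is the mechanism the study probes.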