🤖 AI Summary
This work addresses the poor adversarial robustness of directly trained spiking neural networks (SNNs), which stems primarily from neurons whose membrane potentials sit near the firing threshold: minor perturbations can flip their spiking state. To mitigate this vulnerability, the authors propose Threshold Guarding Optimization (TGO), which integrates two components: a potential-aware regularization term in the loss function that pushes membrane potentials away from the firing threshold, and stochastic spiking neurons that replace deterministic firing with a probabilistic mechanism. The study identifies threshold-proximal neurons as a critical robustness bottleneck in SNNs and demonstrates that the combined design of potential constraints and probabilistic spiking markedly improves adversarial resilience. Experiments show that TGO substantially reduces both attack success rates and neuron state-flip probabilities under standard adversarial settings.
📝 Abstract
Spiking Neural Networks (SNNs) represent a promising paradigm for energy-efficient neuromorphic computing due to their bio-plausible, spike-driven characteristics. However, the robustness of SNNs in complex adversarial environments remains significantly constrained. In this study, we theoretically demonstrate that threshold-neighboring spiking neurons are the key factor limiting the robustness of directly trained SNNs. These neurons set the upper limit on the maximum potential strength of adversarial attacks and are prone to state-flipping under minor disturbances. To address this challenge, we propose a Threshold Guarding Optimization (TGO) method, which comprises two key components. First, we incorporate additional constraints into the loss function to move neurons' membrane potentials away from their thresholds. This increases the gradient sparsity of SNNs, thereby reducing the theoretical upper bound of adversarial attacks. Second, we introduce noisy spiking neurons that change the firing mechanism from deterministic to probabilistic, decreasing the probability of state flips under minor disturbances. Extensive experiments in standard adversarial scenarios show that our method significantly enhances the robustness of directly trained SNNs. These findings pave the way toward more reliable and secure neuromorphic computing in real-world applications.
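To illustrate the first component, here is a minimal sketch of a margin-style penalty that grows as a membrane potential approaches the firing threshold. The function name, the hinge form, and the `margin` parameter are illustrative assumptions, not the paper's exact regularizer:

```python
def threshold_guard_penalty(potentials, theta=1.0, margin=0.2):
    """Hypothetical hinge-style penalty: neurons whose membrane potential u
    lies within `margin` of the threshold `theta` are penalized, which (added
    to the task loss) pushes potentials away from the threshold. The term is
    zero once |u - theta| >= margin."""
    return sum(max(0.0, margin - abs(u - theta)) for u in potentials)

# A neuron sitting exactly at threshold incurs the full margin penalty,
# while one far from threshold contributes nothing.
print(threshold_guard_penalty([1.0, 0.3]))  # -> 0.2
```

In a real training loop this term would be weighted and added to the classification loss; the hinge shape is just one plausible way to encode "stay at least `margin` away from threshold".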
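The second component can be sketched as a probabilistic firing rule: instead of a hard step at the threshold, the spike probability rises smoothly with the membrane potential, so a tiny perturbation near threshold only slightly shifts the firing probability rather than deterministically flipping the output. The sigmoid form and `temperature` parameter are assumptions for illustration; the paper's noisy-neuron formulation may differ:

```python
import math
import random

def stochastic_fire(u, theta=1.0, temperature=0.1, rng=random):
    """Probabilistic spiking: fire with probability sigmoid((u - theta)/T).
    A deterministic neuron would flip from 0 to 1 as u crosses theta; here a
    perturbation from u = theta - eps to u = theta + eps only nudges the
    spike probability (e.g. ~0.48 -> ~0.52 for eps = 0.01, T = 0.1)."""
    p = 1.0 / (1.0 + math.exp(-(u - theta) / temperature))
    return 1 if rng.random() < p else 0

random.seed(0)
# At u == theta the neuron fires about half the time.
rate = sum(stochastic_fire(1.0) for _ in range(1000)) / 1000
print(rate)
```

Far below threshold the firing probability approaches 0 and far above it approaches 1, so the neuron still behaves nearly deterministically away from the threshold region that the regularizer above targets.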