🤖 AI Summary
This study systematically investigates three core challenges in spiking neural network (SNN) training: (1) the difficulty of gradient estimation arising from the non-differentiable spiking mechanism; (2) the high computational and memory overhead, and biological implausibility, of backpropagation through time (BPTT); and (3) the trade-off between biological plausibility and task performance inherent in local learning rules. To address these, we conduct a unified empirical evaluation of learning methods with varying degrees of locality, including STDP and e-prop, and introduce an explicit recurrent weight design, uncovering for the first time training dynamics shared across local methods. Experiments demonstrate that explicit recurrence significantly enhances robustness, improving adversarial accuracy on CIFAR-10 by 12.3% on average. Moreover, we are the first to assess local learning rules under both FGSM (white-box, gradient-based) and NES (black-box) attacks: under NES, local methods retain over 68% accuracy, surpassing BPTT, and thus offer a path toward biologically plausible, resource-efficient, and robust SNN training.
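The distinction between the implicit recurrence of spiking neurons (membrane state carried across time steps) and an added explicit recurrent weight can be illustrated with a minimal leaky integrate-and-fire (LIF) simulation. This is a sketch only: the neuron model parameters, the single-neuron setting, and the `w_rec` feedback term are illustrative assumptions, not the architecture evaluated in the paper.

```python
def lif_step(v, i_in, beta=0.9, v_th=1.0):
    """One discrete-time step of a leaky integrate-and-fire neuron.

    Returns the updated membrane potential and a spike flag.
    beta is the leak factor, v_th the firing threshold (illustrative values).
    """
    v = beta * v + i_in                 # leaky integration of input current
    spike = 1.0 if v >= v_th else 0.0   # non-differentiable spiking nonlinearity
    v = v - spike * v_th                # soft reset after a spike
    return v, spike


def run_neuron(inputs, w_rec=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    The membrane potential v carries state across steps (implicit recurrence);
    w_rec feeds the previous spike back as extra input (explicit recurrence).
    """
    v, prev_spike, spikes = 0.0, 0.0, []
    for x in inputs:
        v, s = lif_step(v, x + w_rec * prev_spike)
        spikes.append(s)
        prev_spike = s
    return spikes
```

With `w_rec=0.0` the neuron is only implicitly recurrent through its membrane potential; a positive `w_rec` lets each spike excite the neuron on the next step, which is the kind of explicit recurrent pathway whose effect on robustness the paper studies.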
📝 Abstract
Spiking Neural Networks (SNNs), which provide more realistic neuronal dynamics, have been shown to achieve performance comparable to Artificial Neural Networks (ANNs) in several machine learning tasks. SNNs process information as spikes in an event-driven manner, which significantly reduces energy consumption. However, training SNNs is challenging due to the non-differentiable nature of the spiking mechanism. Traditional approaches, such as Backpropagation Through Time (BPTT), are effective but come with additional computational and memory costs and are biologically implausible. In contrast, recent works propose alternative learning methods with varying degrees of locality, demonstrating success in classification tasks. In this work, we show that these methods share similarities during training while presenting a trade-off between biological plausibility and performance. Further, we examine the implicitly recurrent nature of SNNs and investigate the effect of adding explicit recurrence. We experimentally show that adding explicit recurrent weights enhances the robustness of SNNs. We also investigate the performance of local learning methods under gradient-based and gradient-free adversarial attacks.
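The two attack families mentioned above can be sketched in a few lines: FGSM perturbs the input by `eps` in the sign of the loss gradient, while NES estimates that gradient from black-box function evaluations alone via antithetic Gaussian sampling. The objective function, sample count, and step sizes below are illustrative placeholders, not the paper's experimental setup.

```python
import random


def nes_gradient(f, x, sigma=0.1, n=200, seed=0):
    """NES-style black-box gradient estimate of a scalar function f at x.

    Uses only function evaluations (no access to f's internals), averaging
    antithetic finite differences along random Gaussian directions.
    """
    rng = rng_local = random.Random(seed)
    d = len(x)
    grad = [0.0] * d
    for _ in range(n):
        u = [rng_local.gauss(0.0, 1.0) for _ in range(d)]
        x_plus = [xi + sigma * ui for xi, ui in zip(x, u)]
        x_minus = [xi - sigma * ui for xi, ui in zip(x, u)]
        delta = f(x_plus) - f(x_minus)          # antithetic difference
        for j in range(d):
            grad[j] += delta * u[j] / (2 * sigma * n)
    return grad


def fgsm_step(x, grad, eps=0.05):
    """FGSM-style perturbation: shift each coordinate by eps in the
    direction of the (estimated) gradient sign to increase the loss."""
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]
```

In a white-box FGSM attack the gradient comes from backpropagation through the model; in the black-box NES setting it is replaced by the sampled estimate above, which is why gradient-free attacks remain applicable even when exact gradients through the spiking mechanism are unavailable.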