Benchmarking Spiking Neural Network Learning Methods with Varying Locality

📅 2024-02-01
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses three core challenges in spiking neural network (SNN) training: (1) gradient estimation through the non-differentiable spiking mechanism; (2) the computational/memory overhead and biological implausibility of backpropagation through time (BPTT); and (3) the trade-off between biological plausibility and task performance inherent in local learning rules. The authors conduct a unified empirical evaluation of learning methods with varying degrees of locality, including STDP and e-prop, introduce an explicit recurrent weight design, and uncover training dynamics shared across the local methods. Experiments show that explicit recurrence enhances robustness, improving adversarial accuracy on CIFAR-10 by 12.3% on average. Local learning rules are further assessed under both FGSM (white-box, gradient-based) and NES (black-box) attacks: under NES, local methods retain over 68% accuracy, surpassing BPTT, pointing toward biologically plausible, resource-efficient, and robust SNN training.
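The "explicit recurrent weight design" mentioned above can be illustrated with a minimal sketch of a leaky integrate-and-fire (LIF) layer whose previous spikes feed back through a dedicated recurrent matrix. This is not the paper's implementation; the function names, weight scales, and hyperparameters (`beta`, `v_th`) are illustrative assumptions.

```python
import numpy as np

def lif_step(x, v, s_prev, W_in, W_rec, beta=0.9, v_th=1.0):
    """One timestep of an LIF layer with explicit recurrent weights.

    v      : membrane potential (leaky decay with factor beta)
    s_prev : spikes from the previous timestep, fed back via W_rec
    """
    # Membrane update: leak + feedforward drive + explicit recurrent drive.
    v = beta * v + x @ W_in + s_prev @ W_rec
    s = (v >= v_th).astype(v.dtype)  # non-differentiable spike function
    v = v * (1.0 - s)                # hard reset of neurons that spiked
    return s, v

rng = np.random.default_rng(0)
T, n_in, n_hid = 10, 4, 6
W_in = rng.normal(0.0, 0.5, (n_in, n_hid))
W_rec = rng.normal(0.0, 0.1, (n_hid, n_hid))  # the explicit recurrence
v = np.zeros(n_hid)
s = np.zeros(n_hid)
spikes = []
for t in range(T):
    x = rng.random(n_in)
    s, v = lif_step(x, v, s, W_in, W_rec)
    spikes.append(s)
spikes = np.stack(spikes)  # shape (T, n_hid), entries in {0, 1}
```

Without `W_rec` the layer is only implicitly recurrent through its membrane state; the extra term makes the spike history an explicit input at every step.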

📝 Abstract
Spiking Neural Networks (SNNs), providing more realistic neuronal dynamics, have been shown to achieve performance comparable to Artificial Neural Networks (ANNs) in several machine learning tasks. Information is processed as spikes within SNNs in an event-based mechanism that significantly reduces energy consumption. However, training SNNs is challenging due to the non-differentiable nature of the spiking mechanism. Traditional approaches, such as Backpropagation Through Time (BPTT), have shown effectiveness but come with additional computational and memory costs and are biologically implausible. In contrast, recent works propose alternative learning methods with varying degrees of locality, demonstrating success in classification tasks. In this work, we show that these methods share similarities during the training process, while they present a trade-off between biological plausibility and performance. Further, this research examines the implicitly recurrent nature of SNNs and investigates the influence of adding explicit recurrence to SNNs. We experimentally show that the addition of explicit recurrent weights enhances the robustness of SNNs. We also investigate the performance of local learning methods under gradient-based and non-gradient-based adversarial attacks.
Problem

Research questions and friction points this paper is trying to address.

Evaluating SNN learning methods with varying locality
Assessing impact of explicit recurrence on SNN robustness
Testing local learning against gradient-based adversarial attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local learning methods for SNN training
Explicit recurrent weights enhance robustness
Comparison under gradient and non-gradient attacks
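The gradient vs non-gradient attack comparison can be sketched on a toy differentiable model: FGSM perturbs along the sign of the true input gradient (white-box), while NES estimates that gradient from score queries alone (black-box). This is a minimal illustration, not the paper's evaluation; the model, step size `eps`, and sample counts are assumptions.

```python
import numpy as np

# Toy differentiable "model": a logistic score f(x) = sigmoid(w . x).
rng = np.random.default_rng(1)
w = rng.normal(size=8)
x = rng.normal(size=8)

def f(x):
    return 1.0 / (1.0 + np.exp(-w @ x))

def fgsm(x, eps=0.1):
    """White-box FGSM: one step along the sign of the exact gradient."""
    p = f(x)
    grad = p * (1.0 - p) * w  # analytic d f / d x for the toy model
    return x + eps * np.sign(grad)

def nes_grad(x, n=500, sigma=0.01, seed=2):
    """Black-box NES: antithetic Gaussian sampling of score queries only."""
    g = np.zeros_like(x)
    r = np.random.default_rng(seed)
    for _ in range(n):
        u = r.normal(size=x.shape)
        g += (f(x + sigma * u) - f(x - sigma * u)) * u / (2.0 * sigma)
    return g / n

x_fgsm = fgsm(x)                        # needs gradient access
x_nes = x + 0.1 * np.sign(nes_grad(x))  # needs only forward queries
```

Both attacks push the score in the same direction, but NES never touches the gradient, which is why it remains applicable to models (or learning rules) whose gradients are unavailable or unreliable.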
Jiaqi Lin
School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
Sen Lu
School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
Malyaban Bal
School of Electrical Engineering and Computer Science, The Pennsylvania State University, University Park, PA 16802, USA
Abhronil Sengupta
Monkowski Career Development Associate Professor of EECS, Penn State University
Neuromorphic Computing