DelGrad: Exact event-based gradients in spiking networks for training delays and weights

📅 2024-04-30
📈 Citations: 1
Influential: 0
🤖 AI Summary
Spiking neural networks (SNNs) face fundamental bottlenecks when jointly optimizing transmission delays and synaptic weights on neuromorphic hardware, including inaccurate gradient estimation, reliance on discrete-time approximations, and poor hardware compatibility. This paper introduces DelGrad, an event-driven analytical method that computes exact loss gradients with respect to both transmission delays and synaptic weights, without recording membrane potentials or discretizing time. Its core innovation is precise, joint differentiability over delays and weights, which naturally expands the temporal search space and supports chip-in-the-loop training. Evaluated on the BrainScaleS-2 neuromorphic system, DelGrad achieves higher classification accuracy under noisy mixed-signal conditions, reduces parameter count by 37%, improves noise robustness, and significantly lowers memory and I/O overhead.

📝 Abstract
Spiking neural networks (SNNs) inherently rely on the timing of signals for representing and processing information. Incorporating trainable transmission delays, alongside synaptic weights, is crucial for shaping these temporal dynamics. While recent methods have shown the benefits of training delays and weights in terms of accuracy and memory efficiency, they rely on discrete time, approximate gradients, and full access to internal variables like membrane potentials. This limits their precision, efficiency, and suitability for neuromorphic hardware due to increased memory requirements and I/O bandwidth demands. To address these challenges, we propose DelGrad, an analytical, event-based method to compute exact loss gradients for both synaptic weights and delays. The inclusion of delays in the training process emerges naturally within our proposed formalism, enriching the model's search space with a temporal dimension. Moreover, DelGrad, grounded purely in spike timing, eliminates the need to track additional variables such as membrane potentials. To showcase this key advantage, we demonstrate the functionality and benefits of DelGrad on the BrainScaleS-2 neuromorphic platform, by training SNNs in a chip-in-the-loop fashion. For the first time, we experimentally demonstrate the memory efficiency and accuracy benefits of adding delays to SNNs on noisy mixed-signal hardware. Additionally, these experiments reveal the potential of delays for stabilizing networks against noise. DelGrad opens a new way of training SNNs with delays on neuromorphic hardware, resulting in fewer required parameters, higher accuracy, and easier hardware training.
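The closed-form, event-based gradients described in the abstract can be illustrated on a toy model. The sketch below is NOT the paper's actual neuron model: it uses a hypothetical current-integrating unit whose membrane charges as dV/dt = Σᵢ wᵢ·H(t − (tᵢ + dᵢ)) and spikes at V = θ, because this case has a closed-form first-spike time. As in DelGrad's general idea, the output spike time is then exactly differentiable with respect to both weights and delays, using only spike times and no membrane traces:

```python
import numpy as np

def spike_time_and_grads(w, t_in, d, theta=1.0):
    """First spike time T of a toy current-integrating neuron
    (dV/dt = sum_i w_i * H(t - (t_i + d_i)), spike at V = theta),
    plus exact gradients dT/dw and dT/dd (zero for acausal inputs)."""
    a = t_in + d                          # effective arrival times
    order = np.argsort(a)
    a_s, w_s = a[order], w[order]
    for k in range(1, len(a_s) + 1):      # try growing causal sets
        W = w_s[:k].sum()
        if W <= 0:
            continue                      # slope too shallow to cross theta
        # closed form: sum_i w_i * (T - a_i) = theta over the causal set
        T = (theta + (w_s[:k] * a_s[:k]).sum()) / W
        upper = a_s[k] if k < len(a_s) else np.inf
        if a_s[k - 1] <= T <= upper:      # crossing consistent with causality
            causal = order[:k]
            dT_dw = np.zeros_like(w)
            dT_dd = np.zeros_like(w)
            dT_dw[causal] = (a[causal] - T) / W   # exact, spike-timing-only
            dT_dd[causal] = w[causal] / W
            return T, dT_dw, dT_dd
    return np.inf, np.zeros_like(w), np.zeros_like(w)  # no output spike
```

Both gradients follow from implicitly differentiating the threshold condition and match finite differences to numerical precision; chaining such expressions through layers is what makes exact, purely event-based delay-and-weight training possible without storing membrane potentials.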
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks (SNNs)
Training Efficiency
Neuromorphic Hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

DelGrad
event-driven learning
BrainScaleS-2 hardware
Julian Goltz
Kirchhoff-Institute for Physics, Heidelberg University; Department of Physiology, University of Bern
Jimmy Weber
Institute of Neuroinformatics, University of Zurich and ETH Zurich
Laura Kriener
Postdoctoral Researcher, Institute of Neuroinformatics, University of Zurich & ETH Zurich
Artificial Intelligence · Brain-Inspired Computing · Computational Neuroscience · Deep Learning
Peter Lake
Kirchhoff-Institute for Physics, Heidelberg University
M. Payvand
Institute of Neuroinformatics, University of Zurich and ETH Zurich
Mihai A. Petrovici
Group Leader, NeuroTMA Lab, University of Bern
Brain-Inspired Computing · Neuromorphics · Computational Neuroscience · Theoretical Neuroscience