Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks

📅 2022-12-02
🏛️ Neuromorph. Comput. Eng.
📈 Citations: 10
Influential: 1
🤖 AI Summary
Low-power keyword spotting on neuromorphic hardware: This work scales the Eventprop exact-gradient learning algorithm for spiking neural networks (SNNs) to challenging keyword recognition benchmarks. It contributes three key ingredients: (1) a loss-function shaping strategy that extends Eventprop to a wider class of loss functions, on which successful learning turns out to depend strongly; (2) a single-recurrent-layer SNN architecture with delay-line inputs and heterogeneous, trainable time constants for richer temporal modelling; and (3) an efficient GPU-accelerated implementation in GeNN that combines suitable regularisation with event-based data augmentation. The resulting networks achieve state-of-the-art performance on Spiking Heidelberg Digits and good accuracy on Spiking Speech Commands, while training 3× faster and using 4× less memory than a leading surrogate-gradient method.
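To make the two temporal mechanisms in the summary concrete, here is a minimal NumPy sketch of delay-line inputs and heterogeneous per-neuron membrane time constants in a single recurrent LIF layer. This is an illustration, not the authors' GeNN implementation: all sizes, initialisations, and the simple discrete-time LIF update are assumptions made for the sketch.

```python
# Sketch only: delay-line inputs + heterogeneous time constants in a
# recurrent LIF layer. Not the paper's GeNN code; shapes and constants
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_delay, n_hid = 80, 10, 256   # input channels, delay taps, hidden units
dt, t_steps = 1.0, 100               # ms per step, trial length in steps
v_thresh = 1.0

def delay_line(spikes):
    """Stack n_delay time-shifted copies of the input, giving the single
    recurrent layer direct access to a short window of input history.
    spikes: (t_steps, n_in) binary array -> (t_steps, n_in * n_delay)."""
    taps = []
    for d in range(n_delay):
        tap = np.roll(spikes, d, axis=0)  # np.roll returns a copy
        tap[:d] = 0.0                     # zero the wrapped-around rows
        taps.append(tap)
    return np.concatenate(taps, axis=1)

# Heterogeneous time constants: one tau per hidden neuron. In the paper
# these are trainable parameters; here they are fixed random values.
tau_m = rng.uniform(10.0, 40.0, n_hid)
alpha = np.exp(-dt / tau_m)              # per-neuron leak factor

w_in = rng.normal(0.0, 0.1, (n_in * n_delay, n_hid))
w_rec = rng.normal(0.0, 0.05, (n_hid, n_hid))

def run(spikes_in):
    """Simulate the recurrent LIF layer and return its spike raster."""
    x = delay_line(spikes_in)
    v = np.zeros(n_hid)                  # membrane voltages
    s = np.zeros(n_hid)                  # spikes from the previous step
    hidden = []
    for t in range(t_steps):
        v = alpha * v + x[t] @ w_in + s @ w_rec  # leaky integration
        s = (v >= v_thresh).astype(float)        # threshold crossing
        v = np.where(s > 0.0, 0.0, v)            # reset to zero on spike
        hidden.append(s)
    return np.array(hidden)

spikes = (rng.random((t_steps, n_in)) < 0.02).astype(float)  # toy input
print(run(spikes).sum(), "hidden spikes emitted")
```

In the paper the time constants are optimised alongside the weights via Eventprop's exact gradients; keeping them fixed here simply keeps the sketch short.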
📝 Abstract
Event-based machine learning promises more energy-efficient AI on future neuromorphic hardware. Here, we investigate how the recently discovered Eventprop algorithm for gradient descent on exact gradients in spiking neural networks can be scaled up to challenging keyword recognition benchmarks. We implemented Eventprop in the GPU enhanced Neuronal Networks framework (GeNN) and used it for training recurrent spiking neural networks on the Spiking Heidelberg Digits and Spiking Speech Commands datasets. We found that learning depended strongly on the loss function and extended Eventprop to a wider class of loss functions to enable effective training. We then tested a large number of data augmentations and regularisations, and explored different network structures as well as heterogeneous and trainable timescales. We found that when combined with two specific augmentations, the right regularisation and a delay line input, Eventprop networks with one recurrent layer achieved state-of-the-art performance on Spiking Heidelberg Digits and good accuracy on Spiking Speech Commands. In comparison to a leading surrogate-gradient-based SNN training method, our GeNN Eventprop implementation is 3X faster and uses 4X less memory. This work is a significant step towards a low-power neuromorphic alternative to current machine learning paradigms.
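As an illustration of the "wider class of loss functions", one representative member of this family is a cross-entropy computed from the time-integrated membrane voltages of the output neurons. The exact form below is an assumption for illustration; see the paper for the precise losses used.

```latex
% Cross-entropy over time-averaged output voltages, for N trials and
% C output classes; V^n_k(t) is the voltage of output neuron k in
% trial n, T the trial duration, l(n) the correct label of trial n.
\mathcal{L} = -\frac{1}{N}\sum_{n=1}^{N}
  \log\!\left[
    \frac{\exp\!\left(\tfrac{1}{T}\int_{0}^{T} V^{n}_{l(n)}(t)\,dt\right)}
         {\sum_{k=1}^{C}\exp\!\left(\tfrac{1}{T}\int_{0}^{T} V^{n}_{k}(t)\,dt\right)}
  \right]
```

A loss of this kind depends on the voltages only through whole-trial integrals inside a nonlinearity, which goes beyond the loss terms covered by the original Eventprop derivation and is the kind of case the paper's extension is designed to handle.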
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Keyword Recognition
Efficiency and Accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Eventprop
Spiking Neural Networks (SNN)
GPU-accelerated Learning
Thomas Nowotny
Professor of Informatics, University of Sussex
Computational Neuroscience · Hybrid Systems · Machine Learning
James P. Turner
Information & Communication Technologies, Imperial College London, London, SW7 2AZ, UK
James C. Knight
School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK