🤖 AI Summary
To address the low training efficiency and high energy consumption of spiking neural networks (SNNs) on neuromorphic hardware, this paper presents the first event-driven backpropagation training framework for the SpiNNaker2 chip. Methodologically, it realizes EventProp on SpiNNaker2 for the first time, combining a discretized leaky integrate-and-fire (LIF) neuron model with its adjoint equations to enable on-chip, batch-parallel computation of exact gradients from sparse error signals. It introduces an event-packet mechanism for transmitting spikes and error signals between network layers, and complements the on-chip implementation with an off-chip one for prototyping, hyper-parameter search, and hybrid training. Experiments on the Yin-Yang dataset demonstrate a proof-of-concept of batch-parallelized, full on-chip SNN training that preserves the sparsity of spike-based communication. This work fills a gap in efficient SNN training frameworks for SpiNNaker2 and points toward online learning and rapid prototyping directly on neuromorphic chips.
📝 Abstract
Neuromorphic computing aims to replicate the brain's capabilities for energy-efficient and parallel information processing, promising a solution to the increasing demand for faster and more efficient computational systems. Efficient training of neural networks on neuromorphic hardware requires the development of training algorithms that retain the sparsity of spike-based communication during training. Here, we report on the first implementation of event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform. We use EventProp, an algorithm for event-based backpropagation in spiking neural networks (SNNs), to compute exact gradients using sparse communication of error signals between neurons. Our implementation computes multi-layer networks of leaky integrate-and-fire neurons using discretized versions of the differential equations and their adjoints, and uses event packets to transmit spikes and error signals between network layers. We demonstrate a proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin-Yang dataset, and provide an off-chip implementation for efficient prototyping, hyper-parameter search, and hybrid training methods.
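To make the abstract's "discretized versions of the differential equations" concrete, below is a minimal sketch of one forward time step of a discretized leaky integrate-and-fire layer. This is a generic exponential-Euler LIF update, not the paper's actual kernel: the decay factors `alpha`/`beta`, the reset-to-zero behavior, and the function name `lif_step` are illustrative assumptions, and the paper's SpiNNaker2 implementation (and the backward adjoint pass that EventProp adds) may differ in detail.

```python
import numpy as np

def lif_step(v, i_syn, spikes_in, w, alpha=0.95, beta=0.9, v_th=1.0):
    """One discretized LIF time step for a layer of neurons (sketch).

    v         : membrane potentials, shape (n_post,)
    i_syn     : synaptic currents, shape (n_post,)
    spikes_in : binary input spike vector, shape (n_pre,)
    w         : weight matrix, shape (n_pre, n_post)
    alpha/beta: example decay factors, standing in for exp(-dt/tau_mem)
                and exp(-dt/tau_syn); values here are hypothetical.
    """
    # Synaptic current decays and jumps by the weighted input spikes
    # (event-driven: the matrix product only contributes where spikes occur).
    i_syn = beta * i_syn + w.T @ spikes_in
    # Leaky membrane integration of the synaptic current.
    v = alpha * v + (1.0 - alpha) * i_syn
    # Threshold crossing emits an output spike; spiking neurons reset to zero.
    spikes_out = (v >= v_th).astype(v.dtype)
    v = np.where(spikes_out > 0, 0.0, v)
    return v, i_syn, spikes_out
```

In an EventProp-style setup, the spike times recorded during such a forward pass are what the backward (adjoint) pass revisits, so only spike events, not full activation traces, need to be communicated between layers.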