Event-based backpropagation on the neuromorphic platform SpiNNaker2

📅 2024-12-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low training efficiency and high energy consumption of spiking neural networks (SNNs) on neuromorphic hardware, this paper presents the first event-driven backpropagation training framework for the SpiNNaker2 chip. Methodologically, it implements EventProp on SpiNNaker2, integrating a discretized leaky integrate-and-fire (LIF) neuron model with its adjoint equations to compute exact gradients on-chip from sparse spike communication. It introduces an event-packet mechanism for transmitting spikes and error signals between network layers, and complements the on-chip system with an off-chip implementation for prototyping, hyper-parameter search, and hybrid training methods. Experiments on the Yin-Yang dataset provide a proof-of-concept of batch-parallelized, full on-chip SNN training. This work fills a gap in efficient SNN training frameworks for SpiNNaker2 and supports online learning and rapid prototyping on neuromorphic chips.

📝 Abstract
Neuromorphic computing aims to replicate the brain's capabilities for energy efficient and parallel information processing, promising a solution to the increasing demand for faster and more efficient computational systems. Efficient training of neural networks on neuromorphic hardware requires the development of training algorithms that retain the sparsity of spike-based communication during training. Here, we report on the first implementation of event-based backpropagation on the SpiNNaker2 neuromorphic hardware platform. We use EventProp, an algorithm for event-based backpropagation in spiking neural networks (SNNs), to compute exact gradients using sparse communication of error signals between neurons. Our implementation computes multi-layer networks of leaky integrate-and-fire neurons using discretized versions of the differential equations and their adjoints, and uses event packets to transmit spikes and error signals between network layers. We demonstrate a proof-of-concept of batch-parallelized, on-chip training of SNNs using the Yin Yang dataset, and provide an off-chip implementation for efficient prototyping, hyper-parameter search, and hybrid training methods.
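The abstract's "discretized versions of the differential equations" can be illustrated with a minimal forward-pass sketch of a LIF layer. This is a generic exponential-Euler discretization of standard LIF dynamics, not the paper's actual SpiNNaker2 kernels or its adjoint (backward) pass; all function and parameter names are illustrative.

```python
import numpy as np

def simulate_lif_layer(spikes_in, weights, tau_mem=20e-3, tau_syn=5e-3,
                       v_th=1.0, dt=1e-3):
    """Forward pass of one leaky integrate-and-fire (LIF) layer.

    Uses a per-step exponential decay of membrane potential and synaptic
    current (exponential-Euler discretization). A neuron emits a spike
    when its membrane potential crosses v_th, then resets to zero.
    """
    n_steps, n_in = spikes_in.shape
    n_out = weights.shape[1]
    alpha = np.exp(-dt / tau_mem)   # membrane decay factor per time step
    beta = np.exp(-dt / tau_syn)    # synaptic current decay factor per step
    v = np.zeros(n_out)             # membrane potentials
    i_syn = np.zeros(n_out)         # synaptic currents
    spikes_out = np.zeros((n_steps, n_out))
    for t in range(n_steps):
        # Input spike events drive the synaptic current through the weights.
        i_syn = beta * i_syn + spikes_in[t] @ weights
        # Leaky integration of the synaptic current into the membrane.
        v = alpha * v + (1.0 - alpha) * i_syn
        fired = v >= v_th
        spikes_out[t] = fired
        v = np.where(fired, 0.0, v)  # reset membrane on spike
    return spikes_out
```

In an event-driven implementation like the one reported here, only the spike times would be communicated between layers (as event packets) rather than dense per-step activity arrays; the dense loop above is just the easiest way to show the discretized dynamics.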
Problem

Research questions and friction points this paper is trying to address.

Neural Network Training
Energy Efficiency
Parallel Processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

EventProp algorithm
neuromorphic computing
energy-efficient training
Gabriel Béna
SpiNNcloud Systems, Dresden, Germany
Timo Wunderlich
Universitätsmedizin Berlin, Germany
Mahmoud Akl
SpiNNcloud Systems, Dresden, Germany
Bernhard Vogginger
Technische Universität Dresden
neuromorphic hardware, deep learning, computational neuroscience
Christian Mayr
Professor, Technische Universität Dresden
neuromorphic engineering, brain machine interfaces, Analog-to-Digital Converter, MPSoC
Hector Andres Gonzalez
SpiNNcloud Systems, Dresden, Germany; TU Dresden, Germany; ScaDS.AI Dresden/Leipzig, Germany