🤖 AI Summary
This work addresses the energy-efficiency bottleneck in training and deploying Spiking Neural Networks (SNNs) for neuromorphic edge AI. We present the first GPU-accelerated implementation of the Eventprop algorithm within the mlGeNN framework, enabling event-driven backpropagation and end-to-end deployment onto Intel Loihi 2 neuromorphic chips. Compared with conventional GPU/CPU-based inference, our approach achieves near-lossless accuracy on keyword spotting (<0.5% degradation), up to a 10× speedup in per-sample inference latency, and reduces energy consumption to just 0.5% of that of an NVIDIA Jetson Orin Nano. Our key contribution is the first complete, efficient pipeline from Eventprop training in mlGeNN to hardware deployment on Loihi 2, which significantly lowers both training and inference energy costs. This advances the practical deployment of high-efficiency SNNs at the edge, facilitating the migration of AI workloads from the cloud to resource-constrained neuromorphic platforms.
📝 Abstract
Neuromorphic computing can reduce the energy requirements of neural networks and holds the promise of 'repatriating' AI workloads from the cloud to the edge. However, training neural networks on neuromorphic hardware has remained elusive. Here, we instead present a pipeline for training spiking neural networks on GPUs, using the efficient event-driven Eventprop algorithm implemented in mlGeNN, and deploying them on Intel's Loihi 2 neuromorphic chip. Our benchmarking on keyword spotting tasks indicates that there is almost no loss in accuracy between the GPU and Loihi 2 implementations, and that classifying a sample on Loihi 2 is up to 10× faster and uses 200× less energy than on an NVIDIA Jetson Orin Nano.