🤖 AI Summary
To address the bottleneck of native end-to-end backpropagation (BP) training for Spiking Neural Networks (SNNs) on edge devices, this work proposes a multi-core neuromorphic architecture supporting direct BP training. Each core integrates dedicated forward-propagation, backward-propagation, and weight-gradient computation engines. The design introduces two-level parallelism—at the engine level and the core level—and combines spike-driven sparse dataflow scheduling, mixed-precision (FP16) gradient computation, and an on-chip weight-update engine. The architecture achieves an energy efficiency of 1.05 TFLOPS/W at FP16 in a 28 nm process and reduces DRAM accesses by 55–85% compared to an NVIDIA A100 GPU for SNN training, and an FPGA prototype demonstrates scalable 20-core deep SNN training and 5-worker federated learning. To our knowledge, this is the first hardware architecture to enable native SNN training at the edge.
📝 Abstract
There is a growing need for edge training to adapt to dynamically changing environments. Neuromorphic computing is a promising pathway toward high-efficiency intelligent computation in energy-constrained edge settings, but existing neuromorphic architectures cannot directly train spiking neural networks (SNNs) with backpropagation. We develop a multi-core neuromorphic architecture with Feedforward-Propagation, Back-Propagation, and Weight-Gradient engines in each core, supporting highly efficient parallel computing at both the engine and core levels. By combining multiple dataflows with sparse-computation optimization that fully exploits the sparsity in SNN training, the architecture achieves a high energy efficiency of 1.05 TFLOPS/W @ FP16 @ 28 nm and a 55–85% reduction in DRAM access compared to an A100 GPU for SNN training, and it demonstrates 20-core deep SNN training and 5-worker federated learning on FPGAs. Our study presents the first multi-core neuromorphic architecture supporting direct SNN training, facilitating neuromorphic computing in edge-learnable applications.
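To make the three per-core engines concrete, the sketch below illustrates what forward propagation, backward propagation, and weight-gradient computation look like for a single spiking layer trained with a surrogate gradient. This is a hedged illustration only: the paper's exact neuron model, surrogate function, and dataflow are not specified here, so all constants and names (`tau`, `v_th`, the rectangular surrogate, the toy loss) are assumptions chosen for clarity.

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact method): surrogate-gradient BP
# for one LIF (leaky integrate-and-fire) layer, mirroring the three engines
# described: forward propagation, backward propagation, weight gradient.

rng = np.random.default_rng(0)
T, N_in, N_out = 8, 16, 4            # time steps, fan-in, fan-out (illustrative)
tau, v_th = 0.9, 1.0                 # leak factor and firing threshold (assumed)

W = rng.normal(0, 0.5, (N_out, N_in)).astype(np.float16)  # FP16 weights
x = (rng.random((T, N_in)) < 0.3).astype(np.float16)      # sparse input spikes

# --- Forward-Propagation engine: LIF dynamics with hard reset ---
v = np.zeros(N_out, dtype=np.float32)
spikes, v_trace = [], []
for t in range(T):
    v = tau * v + (W @ x[t]).astype(np.float32)
    s = (v >= v_th).astype(np.float32)
    v_trace.append(v.copy())
    spikes.append(s)
    v = v * (1.0 - s)                # reset neurons that fired

# --- Back-Propagation engine: surrogate gradient of the spike function ---
def surrogate_grad(v, alpha=2.0):
    # Rectangular surrogate: d(spike)/dv ~= alpha inside a window around v_th
    return alpha * (np.abs(v - v_th) < 0.5).astype(np.float32)

# --- Weight-Gradient engine: accumulate dL/dW over time steps ---
# Toy loss: L = sum of output spikes, so dL/ds = 1 at every step (no BPTT here).
dW = np.zeros(W.shape, dtype=np.float32)
for t in range(T):
    delta = 1.0 * surrogate_grad(v_trace[t])      # dL/dv at step t
    dW += np.outer(delta, x[t].astype(np.float32))

W = (W - 0.01 * dW).astype(np.float16)            # FP16 on-chip-style update
```

In hardware, the three loops above map to pipelined engines, and the sparsity of `x` and `spikes` is what the spike-driven dataflow scheduling exploits to skip zero operands.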