🤖 AI Summary
To address the low arithmetic intensity and poor computational efficiency of Spiking Neural Networks (SNNs) on general-purpose accelerators such as GPUs and TPUs, this paper proposes FeNN-DMA, a programmable System-on-Chip (SoC) tailored for FPGAs. Built around a RISC-V processor core, FeNN-DMA pairs a custom Direct Memory Access (DMA) engine with a sparsity-friendly on-chip memory architecture, significantly improving SNN simulation throughput and model scalability. Unlike conventional fixed-function designs, FeNN-DMA supports flexible deployment of large-scale, heterogeneous SNNs while keeping logic resource utilization and power consumption low. On the Spiking Heidelberg Digits (SHD) and Neuromorphic MNIST (N-MNIST) benchmarks, FeNN-DMA achieves state-of-the-art classification accuracy while matching the energy efficiency and hardware resource utilization of dedicated SNN accelerators.
📝 Abstract
Spiking Neural Networks (SNNs) are a promising, energy-efficient alternative to standard Artificial Neural Networks (ANNs) and are particularly well-suited to spatio-temporal tasks such as keyword spotting and video classification. However, SNNs have a much lower arithmetic intensity than ANNs and are therefore not well-matched to standard accelerators like GPUs and TPUs. Field Programmable Gate Arrays (FPGAs) are designed for such memory-bound workloads, and here we develop a novel, fully-programmable RISC-V-based system-on-chip (FeNN-DMA), tailored to simulating SNNs on modern UltraScale+ FPGAs. We show that FeNN-DMA has comparable resource usage and energy requirements to state-of-the-art fixed-function SNN accelerators, yet it is capable of simulating much larger and more complex models. Using this functionality, we demonstrate state-of-the-art classification accuracy on the Spiking Heidelberg Digits and Neuromorphic MNIST tasks.
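The low arithmetic intensity mentioned above can be illustrated with a minimal sketch (not the paper's implementation; the leaky integrate-and-fire model, layer sizes, and spike rate here are illustrative assumptions). Because presynaptic spikes are sparse, an event-driven update touches only the weight rows of neurons that fired, and each weight fetched from memory is used in just one addition, which makes the workload memory-bound rather than compute-bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 256, 128                       # illustrative layer sizes
weights = rng.normal(0.0, 0.1, (n_pre, n_post))  # dense weight matrix

v = np.zeros(n_post)                           # membrane potentials
tau, v_thresh = 20.0, 1.0                      # assumed LIF parameters
decay = np.exp(-1.0 / tau)

# Sparse input: only a few presynaptic neurons spike per timestep.
spikes = rng.random(n_pre) < 0.05
spike_ids = np.flatnonzero(spikes)

# Event-driven update: accumulate only the rows of spiking neurons.
# Each weight loaded from memory contributes a single add, so the
# ratio of arithmetic to memory traffic (arithmetic intensity) is low.
i_syn = weights[spike_ids].sum(axis=0)

v = decay * v + i_syn                          # leaky integration
out_spikes = v >= v_thresh                     # threshold crossing
v[out_spikes] = 0.0                            # reset fired neurons
```

The event-driven sum produces the same synaptic input as a dense matrix-vector product with the binary spike vector, but skips the roughly 95% of rows whose presynaptic neuron stayed silent, which is why cache- and DMA-friendly memory architectures matter more than raw FLOPs for this workload.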