FeNN-DMA: A RISC-V SoC for SNN acceleration

📅 2025-11-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low arithmetic intensity and poor computational efficiency of Spiking Neural Networks (SNNs) on general-purpose accelerators such as GPUs and TPUs, this paper proposes FeNN-DMA, a programmable System-on-Chip (SoC) tailored to FPGAs. Built on a RISC-V processor core, FeNN-DMA integrates a custom Direct Memory Access (DMA) engine with a sparsity-friendly on-chip memory architecture, significantly improving SNN simulation throughput and model scalability. Unlike fixed-function designs, FeNN-DMA enables flexible deployment of large-scale, heterogeneous SNNs while maintaining low logic resource utilization and power consumption. Evaluated on the Spiking Heidelberg Digits (SHD) and Neuromorphic MNIST (N-MNIST) benchmarks, FeNN-DMA achieves state-of-the-art classification accuracy while matching the energy efficiency and hardware resource usage of dedicated SNN accelerators.
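The arithmetic-intensity point above can be made concrete with a minimal, purely illustrative sketch (this is not FeNN-DMA's actual kernel; all names, sizes, and parameters below are hypothetical): in an event-driven update, each presynaptic spike triggers a row of weight loads with only a single addition per loaded weight, so the loop is dominated by memory traffic rather than arithmetic.

```python
# Minimal, illustrative event-driven LIF update step.
# NOT FeNN-DMA's implementation; names and parameters are hypothetical.
import random

N_PRE, N_POST = 256, 128
TAU_DECAY = 0.9          # membrane leak factor per timestep (assumed)
V_THRESH = 1.0           # firing threshold (assumed)

# Dense weight matrix; on a real SoC rows would be streamed from memory.
weights = [[random.uniform(-0.1, 0.1) for _ in range(N_POST)] for _ in range(N_PRE)]
v = [0.0] * N_POST       # membrane potentials

def step(spiking_pre):
    """Advance one timestep given the indices of presynaptic neurons that spiked."""
    # Synaptic phase: one memory load and one add per weight, i.e. roughly
    # one operation per word moved -- far below a dense matrix multiply.
    for pre in spiking_pre:
        row = weights[pre]           # row fetch: the memory-bound part
        for post in range(N_POST):
            v[post] += row[post]
    # Neuron phase: leak, threshold, reset.
    out_spikes = []
    for post in range(N_POST):
        v[post] *= TAU_DECAY
        if v[post] > V_THRESH:
            out_spikes.append(post)
            v[post] = 0.0
    return out_spikes

# Example: a sparse input volley (~5% of presynaptic neurons active).
print(step(random.sample(range(N_PRE), N_PRE // 20)))
```

A loop of this shape leaves most of a GPU's arithmetic units idle waiting on memory, which is the mismatch the paper targets with an FPGA-based, DMA-fed design.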

📝 Abstract
Spiking Neural Networks (SNNs) are a promising, energy-efficient alternative to standard Artificial Neural Networks (ANNs) and are particularly well-suited to spatio-temporal tasks such as keyword spotting and video classification. However, SNNs have a much lower arithmetic intensity than ANNs and are therefore not well-matched to standard accelerators like GPUs and TPUs. Field Programmable Gate Arrays (FPGAs) are designed for such memory-bound workloads and here we develop a novel, fully-programmable RISC-V-based system-on-chip (FeNN-DMA), tailored to simulating SNNs on modern UltraScale+ FPGAs. We show that FeNN-DMA has comparable resource usage and energy requirements to state-of-the-art fixed-function SNN accelerators, yet it is capable of simulating much larger and more complex models. Using this functionality, we demonstrate state-of-the-art classification accuracy on the Spiking Heidelberg Digits and Neuromorphic MNIST tasks.
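As a rough back-of-the-envelope comparison of arithmetic intensity (operations per byte of weight traffic) between a dense ANN layer and an event-driven SNN layer; the layer sizes, batch size, spike rate, and 16-bit weights below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope arithmetic intensity (ops per byte of weights moved).
# All sizes and sparsity values are illustrative assumptions, not from the paper.

BYTES_PER_WEIGHT = 2      # assume 16-bit weights
N_PRE, N_POST = 1024, 1024
BATCH = 32                # dense ANN inference batch (each weight reused 32x)
SPIKE_RATE = 0.05         # fraction of presynaptic neurons spiking per timestep

# Dense ANN layer (GEMM): every weight is loaded once and reused across the batch.
dense_ops = 2 * N_PRE * N_POST * BATCH        # multiply + add per weight per sample
dense_bytes = N_PRE * N_POST * BYTES_PER_WEIGHT
print(f"dense ANN layer : {dense_ops / dense_bytes:.1f} ops/byte")

# Event-driven SNN layer: only rows of spiking neurons are touched,
# and each loaded weight is used for a single accumulation.
snn_ops = int(SPIKE_RATE * N_PRE) * N_POST    # one add per loaded weight
snn_bytes = int(SPIKE_RATE * N_PRE) * N_POST * BYTES_PER_WEIGHT
print(f"event-driven SNN: {snn_ops / snn_bytes:.1f} ops/byte")
```

Under these assumed numbers the dense layer performs tens of operations per byte moved, while the event-driven update performs well under one, which is why the abstract characterises SNN simulation as a memory-bound workload.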
Problem

Research questions and friction points this paper is trying to address.

Accelerating Spiking Neural Networks efficiently
Overcoming the arithmetic-intensity mismatch between SNNs and GPUs
Enabling larger SNN models on FPGA platforms
Innovation

Methods, ideas, or system contributions that make the work stand out.

RISC-V SoC for SNN acceleration on FPGAs
Programmable design enables complex model simulation
Optimized for memory-bound spiking neural workloads
Zainab Aizaz
School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
James C. Knight
School of Engineering and Informatics, University of Sussex, Brighton, BN1 9QJ, UK
Thomas Nowotny
Professor of Informatics, University of Sussex
Computational Neuroscience, Hybrid Systems, Machine Learning