AI Summary
Mainstream accelerators (e.g., GPUs/TPUs) are optimized for artificial neural networks (ANNs) with high arithmetic intensity, rendering them inefficient and power-hungry for spiking neural network (SNN) simulation. To address this, we propose FeNN, a programmable RISC-V vector processor tailored for FPGAs and specifically optimized for SNN emulation. Our key contributions are: (1) the first application of a programmable RISC-V vector architecture to SNN acceleration; and (2) the integration of stochastic rounding and fixed-point saturation mechanisms, achieving high numerical fidelity with minimal hardware overhead. Implemented as a soft-core processor, FeNN supports scalable deployment across edge-to-cloud scenarios. Experimental results demonstrate that a single FeNN core outperforms embedded GPUs and Intel Loihi in SNN inference latency, while delivering substantial improvements in energy efficiency and FPGA resource utilization. FeNN establishes a highly adaptable hardware paradigm for low-power neuromorphic computing.
Abstract
Spiking Neural Networks (SNNs) have the potential to drastically reduce the energy requirements of AI systems. However, mainstream accelerators like GPUs and TPUs are designed for the high arithmetic intensity of standard artificial neural networks (ANNs) and so are not well-suited to SNN simulation. FPGAs, by contrast, are well-suited to applications with low arithmetic intensity, as they have high off-chip memory bandwidth and large amounts of on-chip memory. Here, we present a novel RISC-V-based soft vector processor (FeNN), tailored to simulating SNNs on FPGAs. Unlike most dedicated neuromorphic hardware, FeNN is fully programmable and designed to be integrated with applications running on standard computers from the edge to the cloud. We demonstrate that, by using stochastic rounding and saturation, FeNN can achieve high numerical precision with low hardware utilisation, and that a single FeNN core can simulate an SNN classifier faster than both an embedded GPU and the Loihi neuromorphic system.
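The stochastic rounding and saturation that the abstract refers to can be illustrated with a short sketch. This is a minimal NumPy illustration of the general technique, not FeNN's actual hardware implementation: the bit widths, fixed-point format, and random number source here are assumptions for the example.

```python
import numpy as np

def stochastic_round_fixed(x, frac_bits=8, total_bits=16, rng=None):
    """Quantise x to a signed fixed-point integer representation using
    stochastic rounding and saturation.

    Illustrative sketch only: frac_bits/total_bits and the software RNG
    are assumed parameters, not FeNN's hardware design.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = 1 << frac_bits
    scaled = np.asarray(x, dtype=np.float64) * scale
    floor = np.floor(scaled)
    frac = scaled - floor
    # Round up with probability equal to the fractional part, so the
    # quantisation error is zero-mean (unbiased) on average.
    q = floor + (rng.random(np.shape(scaled)) < frac)
    # Saturate to the representable signed range instead of wrapping,
    # which avoids the catastrophic sign flips of modular overflow.
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return np.clip(q, lo, hi).astype(np.int64)
```

Because the round-up probability equals the fractional part, small values that deterministic round-to-nearest would flush to zero are still represented correctly in expectation, which is why stochastic rounding preserves numerical fidelity at low precision.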