YANA: Bridging the Neuromorphic Simulation-to-Hardware Gap

📅 2026-04-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the significant gap between simulation and real neuromorphic hardware that hinders algorithmic innovation and hardware-software co-design in spiking neural networks (SNNs). To bridge this divide, the authors propose YANA, an open-source, FPGA-based digital SNN accelerator featuring a five-stage event-driven pipeline and a steady-state single-cycle event processing mechanism. YANA supports arbitrary network topologies and point-to-point connectivity while efficiently exploiting spatiotemporal sparsity. It implements low-overhead leaky integration via lookup tables, integrates with the Neuromorphic Intermediate Representation (NIR), and is deployed on the AMD Kria KR260 platform, enabling an end-to-end workflow from training to deployment. Experiments demonstrate near-linear scaling of inference latency with sparsity; a single core supports up to 2¹⁷ synapses and 2¹⁰ neurons with minimal resource utilization, and the entire system is fully open-source.
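As a rough illustration of the event-driven, LUT-based leak mechanism described above, the behavioral sketch below models a leaky integrate-and-fire neuron that is only updated when an event arrives, reading its decay factor from a precomputed table indexed by the elapsed time since the neuron's last update. This is not YANA's RTL; the time constant, table depth, floating-point state, and all names are illustrative assumptions.

```python
import numpy as np

# Behavioral sketch (not YANA's hardware): event-driven LIF update where the
# leak over the elapsed time since a neuron's last update comes from a LUT.
TAU = 20.0                                    # assumed time constant (timesteps)
LUT_DEPTH = 256                               # assumed table depth
LEAK_LUT = np.exp(-np.arange(LUT_DEPTH) / TAU)

class EventDrivenLIF:
    def __init__(self, n_neurons, threshold=1.0):
        self.v = np.zeros(n_neurons)          # membrane potentials
        self.last_t = np.zeros(n_neurons, dtype=int)
        self.threshold = threshold

    def on_event(self, t, target, weight):
        """Apply one synaptic event at timestep t to neuron `target`."""
        dt = min(t - self.last_t[target], LUT_DEPTH - 1)
        self.v[target] *= LEAK_LUT[dt]        # lazy leak via table lookup
        self.v[target] += weight              # integrate the incoming spike
        self.last_t[target] = t
        if self.v[target] >= self.threshold:  # fire and reset
            self.v[target] = 0.0
            return True
        return False

lif = EventDrivenLIF(n_neurons=4)
print(lif.on_event(t=3, target=0, weight=0.6))  # False: 0.6 is below threshold
print(lif.on_event(t=8, target=0, weight=0.7))  # True: decayed 0.6 plus 0.7 crosses 1.0
```

Updating neurons only on incoming events, with the leak folded into a single table lookup, is what lets such a design skip work for silent neurons and timesteps.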
📝 Abstract
Spiking Neural Networks (SNNs) promise significant advantages over conventional Artificial Neural Networks (ANNs) for applications requiring real-time processing of temporally sparse data streams under strict power constraints -- a concept known as the Neuromorphic Advantage. However, the limited availability of neuromorphic hardware creates a substantial simulation-to-hardware gap that impedes algorithmic innovation, hardware-software co-design, and the development of mature open-source ecosystems. To address this challenge, we introduce Yet Another Neuromorphic Accelerator (YANA), an FPGA-based digital SNN accelerator designed to bridge this gap by providing an accessible hardware and software framework for neuromorphic computing. YANA implements a five-stage, event-driven processing pipeline that fully exploits temporal and spatial sparsity while supporting arbitrary SNN topologies through point-to-point neuron connections. The architecture features an input preprocessing scheme that maintains steady event processing at one event per cycle without buffer overflow risks, and implements hardware-efficient event-driven neuron updates using lookup tables for leak calculations. We demonstrate YANA's sparsity exploitation capabilities through experiments on the Spiking Heidelberg Digits dataset, showing near-linear scaling of inference time with both spatial and temporal sparsity levels. Deployed on the accessible AMD Kria KR260 platform, a single YANA core utilizes 740 LUTs, 918 registers, 7 BRAMs and 24 URAMs, supporting up to $2^{17}$ synapses and $2^{10}$ neurons. We release the YANA framework as an open-source project, providing an end-to-end solution for training, optimizing, and deploying SNNs that integrates with existing neuromorphic computing tools through the Neuromorphic Intermediate Representation (NIR).
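Because the pipeline sustains one event per cycle in steady state, inference latency should grow roughly linearly with the number of spikes that actually occur, which is consistent with the near-linear scaling reported on Spiking Heidelberg Digits. A back-of-the-envelope estimate of that relationship might look like the sketch below; the clock frequency, pipeline depth, and per-timestep overhead are assumed values for illustration, not figures reported for YANA.

```python
# Illustrative latency model for a steady-state one-event-per-cycle pipeline.
# All constants are assumptions for this sketch, not measured YANA numbers.
CLOCK_HZ = 100e6        # assumed FPGA clock
PIPELINE_DEPTH = 5      # five pipeline stages to fill/drain
TIMESTEP_OVERHEAD = 16  # assumed fixed bookkeeping cycles per timestep

def estimated_latency_s(events_per_timestep):
    """Latency scales near-linearly with the total event count."""
    cycles = sum(PIPELINE_DEPTH + TIMESTEP_OVERHEAD + e for e in events_per_timestep)
    return cycles / CLOCK_HZ

dense  = [200] * 100    # 100 timesteps, 200 events each
sparse = [20] * 100     # the same sample, 10x sparser
print(estimated_latency_s(dense), estimated_latency_s(sparse))
```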
Problem

Research questions and friction points this paper is trying to address.

Neuromorphic Computing
Simulation-to-Hardware Gap
Spiking Neural Networks
Hardware-Software Co-design
Open-source Ecosystem
Innovation

Contributions

Methods, ideas, or system contributions that make the work stand out.

Spiking Neural Networks
Neuromorphic Computing
FPGA Accelerator
Event-Driven Processing
Sparsity Exploitation
Authors
Brian Pachideh
FZI Research Center for Information Technology, Karlsruhe, Germany
Sven Nitzsche
FZI Research Center for Information Technology, Karlsruhe, Germany
Moritz Neher
FZI Research Center for Information Technology, Karlsruhe, Germany
Jann Krausse
Infineon Technologies, Dresden, Germany
Carmen Weigelt
Klaus Knobloch
Infineon Technologies, Dresden, Germany
Victor Pazmino Betancourt
FZI Research Center for Information Technology, Karlsruhe, Germany
Juergen Becker
Karlsruhe Institute of Technology