🤖 AI Summary
Existing Spiking Neural Network (SNN) training lacks systematic performance–efficiency evaluation methodologies and efficient hardware support. Method: This work proposes the first energy-optimized SNN training architecture simulator, featuring a system-level energy modeling and simulation framework that tightly integrates spike sparsity, hardware-aware data representation, and energy-aware computation paradigms, enabling architecture-level energy–efficiency co-optimization at early design stages. A hardware prototype is implemented in Verilog HDL and synthesized with Synopsys Design Compiler using the TSMC 28 nm standard-cell library, followed by fine-grained power analysis. Results: On representative SNN training workloads, the proposed architecture achieves significantly higher energy efficiency than state-of-the-art DNN and SNN accelerators, and the experimental validation confirms the feasibility of low-power architectures guided by this simulator.
📝 Abstract
With the growing demand for intelligent computing, neuromorphic computing, a paradigm that mimics the structure and functionality of the human brain, offers a promising approach to building new high-efficiency intelligent computing systems. Spiking Neural Networks (SNNs), the foundation of neuromorphic computing, have garnered significant attention due to their unique potential for energy efficiency and biomimetic neural processing. However, hardware development for efficient SNN training lags significantly, and no systematic energy evaluation methods exist for SNN training tasks. Therefore, this paper proposes an Energy-Oriented Computing Architecture Simulator (EOCAS) for SNN training to identify energy-optimal architectures. EOCAS exploits the high sparsity of spike signals, unique hardware design representations, energy assessment, and computation patterns to support energy optimization across various architectures. Under the guidance of EOCAS, we implement the power-optimal hardware architecture in Verilog HDL and achieve low energy consumption, synthesized with Synopsys Design Compiler using the TSMC 28 nm technology library under typical operating parameters. Compared with several State-Of-The-Art (SOTA) DNN and SNN works, our hardware architecture outperforms the alternatives across multiple criteria.
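The abstract's central premise, that spike sparsity is a first-order lever for energy, can be illustrated with a toy model. The sketch below is purely illustrative: EOCAS's actual energy model is not described in this abstract, so the formula, function name, and per-operation energy constants are hypothetical. It captures only the general idea that synaptic-operation energy scales with the number of spikes rather than with the dense MAC count.

```python
# Hypothetical first-order energy model for one spiking layer per timestep.
# All constants (e_synop_pj, e_static_pj_per_neuron) are illustrative
# assumptions, not values from the EOCAS paper.

def estimate_layer_energy_pj(num_neurons, fan_out, spike_rate,
                             e_synop_pj=0.9, e_static_pj_per_neuron=0.1):
    """Estimate per-timestep energy (pJ) of one spiking layer.

    spike_rate: fraction of neurons that fire this timestep.
    e_synop_pj: assumed energy per synaptic operation (hypothetical).
    e_static_pj_per_neuron: assumed per-neuron state-update/leakage cost.
    """
    # Only firing neurons trigger downstream synaptic work.
    synaptic_ops = num_neurons * spike_rate * fan_out
    dynamic = synaptic_ops * e_synop_pj
    # Membrane updates happen for every neuron regardless of firing.
    static = num_neurons * e_static_pj_per_neuron
    return dynamic + static

# Sparse spiking (10% firing) vs. a dense-equivalent workload (100% activity):
sparse = estimate_layer_energy_pj(1024, 512, 0.10)
dense = estimate_layer_energy_pj(1024, 512, 1.00)
print(f"sparse/dense energy ratio: {sparse / dense:.2f}")
```

Under these assumed constants, the sparse layer consumes roughly a tenth of the dense layer's energy, which is why an architecture simulator that models sparsity explicitly can rank candidate designs very differently from a dense-DNN cost model.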