Energy-Oriented Computing Architecture Simulator for SNN Training

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training of Spiking Neural Networks (SNNs) currently lacks systematic performance–efficiency evaluation methodologies and efficient hardware support. Method: This work proposes the first energy-oriented SNN training architecture simulator, featuring a system-level energy modeling and simulation framework that tightly integrates spike sparsity, hardware-aware data representation, and energy-aware computation paradigms, enabling architecture-level energy–efficiency co-optimization at early design stages. A hardware prototype is implemented in Verilog HDL and synthesized with Synopsys Design Compiler using the TSMC 28 nm standard-cell library, followed by fine-grained power analysis. Results: On representative SNN training workloads, the proposed architecture achieves significantly higher energy efficiency than state-of-the-art DNN and SNN accelerators. Experimental validation confirms the feasibility of low-power architectures guided by this simulator.

📝 Abstract
With the growing demand for intelligent computing, neuromorphic computing, a paradigm that mimics the structure and functionality of the human brain, offers a promising approach to developing new high-efficiency intelligent computing systems. Spiking Neural Networks (SNNs), the foundation of neuromorphic computing, have garnered significant attention due to their unique potential in energy efficiency and biomimetic neural processing. However, hardware development for efficient SNN training lags significantly, and no systematic energy evaluation methods exist for SNN training tasks. Therefore, this paper proposes an Energy-Oriented Computing Architecture Simulator (EOCAS) for SNN training to identify the optimal architecture. EOCAS investigates the high sparsity of spike signals, unique hardware design representations, energy assessment, and computation patterns to support energy optimization across various architectures. Under the guidance of EOCAS, we implement the power-optimized hardware architecture in Verilog HDL and achieve low energy consumption using Synopsys Design Compiler with the TSMC 28 nm technology library under typical parameters. Compared with several State-Of-The-Art (SOTA) DNN and SNN works, our hardware architecture outperforms others across various criteria.
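The abstract's key premise, that spike sparsity can be exploited for architecture-level energy estimation, can be illustrated with a toy model. The function below is a minimal sketch only: the energy constants, the compute/memory breakdown, and all parameter names are hypothetical assumptions for illustration, not values or equations from EOCAS.

```python
# Toy system-level energy model in the spirit of sparsity-aware early-stage
# estimation. All constants below are hypothetical, not from the paper.

def snn_training_energy(num_synaptic_ops, spike_rate, timesteps,
                        e_spike_op_pj=0.9, e_mem_access_pj=5.0,
                        mem_accesses_per_op=0.5):
    """Estimate energy (microjoules) for one training pass.

    Spike sparsity gates computation: only a `spike_rate` fraction of
    potential synaptic operations fires per timestep, so both compute
    and memory energy scale with the activity level.
    """
    active_ops = num_synaptic_ops * spike_rate * timesteps
    compute_energy_pj = active_ops * e_spike_op_pj
    memory_energy_pj = active_ops * mem_accesses_per_op * e_mem_access_pj
    return (compute_energy_pj + memory_energy_pj) / 1e6  # pJ -> uJ

# Sparser spiking (lower spike_rate) yields proportionally lower energy.
dense = snn_training_energy(1_000_000, spike_rate=1.0, timesteps=8)
sparse = snn_training_energy(1_000_000, spike_rate=0.1, timesteps=8)
```

With these assumed constants, a 10x reduction in spike rate gives a 10x energy reduction, which is the first-order effect a sparsity-aware simulator would quantify before layering on architecture-specific costs.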
Problem

Research questions and friction points this paper is trying to address.

Lack of systematic energy evaluation methods for SNN training
Need for optimal hardware architecture for efficient SNN training
High energy consumption in current SNN and DNN hardware designs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Energy-Oriented Computing Architecture Simulator for SNNs
Investigates spike signal sparsity and hardware design
Optimizes energy using Verilog HDL and TSMC-28nm
👥 Authors
Yunhao Ma
Southern University of Science and Technology, and Pengcheng Laboratory, Shenzhen, China
Wanyi Jia
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; and Pengcheng Laboratory, Shenzhen, China
Yanyu Lin
Pengcheng Laboratory, Shenzhen, China
Wenjie Lin
Pengcheng Laboratory, Shenzhen, China
Xueke Zhu
Pengcheng Laboratory, Shenzhen, China
Huihui Zhou
Pengcheng Laboratory
Fengwei An
Southern University of Science and Technology