When Pipelined In-Memory Accelerators Meet Spiking Direct Feedback Alignment: A Co-Design for Neuromorphic Edge Computing

📅 2025-07-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training spiking neural networks (SNNs) on edge devices remains challenging due to the high computational overhead of conventional spike-based backpropagation. To address this, the authors propose PipeSDFA, a hardware–software co-design framework. Methodologically, PipeSDFA is the first to tightly integrate Spiking Direct Feedback Alignment (SDFA) with RRAM-based in-memory computing, featuring a three-level pipelined dataflow that eliminates reliance on sequential error backpropagation and enables highly parallelized, low-latency weight updates. Evaluation on five benchmark datasets shows that PipeSDFA incurs less than 2% accuracy degradation while achieving 1.1×–10.5× faster training and 1.37×–2.1× lower energy consumption than PipeLayer. These results advance the energy efficiency and real-time capability of brain-inspired SNN training at the edge.
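To make the training rule concrete, below is a minimal NumPy sketch of direct feedback alignment applied to a spiking network. The layer sizes, LIF dynamics, rate coding, and rectangular surrogate gradient are illustrative assumptions, not the paper's exact formulation; what the sketch demonstrates is the key property that every layer's update depends only on the global output error and a fixed random feedback matrix, with no layer-by-layer error chain.

```python
# Hypothetical sketch of direct feedback alignment on a spiking network.
# Layer sizes, LIF constants, rate coding, and the rectangular surrogate
# gradient are illustrative assumptions, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)
T, V_TH, DECAY = 20, 1.0, 0.5   # timesteps, firing threshold, membrane leak
sizes = [784, 256, 256, 10]     # input, two hidden layers, output (assumed)

W = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
# Fixed random feedback matrices: each hidden layer receives the output
# error through its own random projection instead of transposed weights.
B = [rng.normal(0, 0.1, (sizes[-1], n)) for n in sizes[1:-1]]

def surrogate(v):
    """Rectangular surrogate derivative of the non-differentiable spike."""
    return (np.abs(v - V_TH) < 0.5).astype(float)

def train_step(x, target, lr=1e-3):
    mem = [np.zeros(n) for n in sizes[1:]]      # membrane potentials
    spikes = [np.zeros(n) for n in sizes[1:]]   # accumulated spike counts
    surs = [np.zeros(n) for n in sizes[1:]]     # accumulated surrogate grads
    pres = [np.zeros(n) for n in sizes[:-1]]    # accumulated presynaptic input
    for _ in range(T):
        inp = (rng.random(sizes[0]) < x).astype(float)  # Bernoulli rate coding
        for l in range(len(W)):
            pres[l] += inp
            mem[l] = DECAY * mem[l] + inp @ W[l]        # LIF integration
            s = (mem[l] >= V_TH).astype(float)          # fire
            surs[l] += surrogate(mem[l])
            mem[l] -= s * V_TH                          # soft reset
            spikes[l] += s
            inp = s
    err = spikes[-1] / T - target                       # output firing-rate error
    # Every layer's update needs only the global error `err`, so all
    # layers can be updated in parallel -- no sequential error chain.
    for l in range(len(W)):
        delta = err if l == len(W) - 1 else (err @ B[l]) * (surs[l] / T)
        W[l] -= lr * np.outer(pres[l] / T, delta)
    return err
```

A call like `train_step(pixels, one_hot_label)` updates all three weight matrices from the same output error; this independence between layer updates is what a hardware training pipeline can exploit.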

📝 Abstract
Spiking Neural Networks (SNNs) are increasingly favored for deployment on resource-constrained edge devices due to their energy-efficient and event-driven processing capabilities. However, training SNNs remains challenging because of the computational intensity of traditional backpropagation algorithms adapted for spike-based systems. In this paper, we propose a novel software–hardware co-design that introduces a hardware-friendly training algorithm, Spiking Direct Feedback Alignment (SDFA), and implements it on a Resistive Random Access Memory (RRAM)-based In-Memory Computing (IMC) architecture, referred to as PipeSDFA, to accelerate SNN training. Software-wise, SDFA reduces the computational complexity of SNN training by eliminating sequential error propagation. Hardware-wise, a three-level pipelined dataflow is designed on the IMC architecture to parallelize the training process. Experimental results demonstrate that the PipeSDFA training accelerator incurs less than 2% accuracy loss on five datasets compared to baselines, while achieving 1.1×–10.5× and 1.37×–2.1× reductions in training time and energy consumption, respectively, compared to PipeLayer.
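As context for the abstract's RRAM-based IMC claim: a resistive crossbar evaluates a matrix-vector product in a single analog step, which is what makes both the forward pass and the feedback projections cheap in hardware. Below is a minimal sketch of that operation; the conductance statistics and the additive read-noise model are assumptions for illustration, not measured device characteristics.

```python
# Minimal sketch of the analog matrix-vector multiply an RRAM crossbar
# performs in one step. Conductance range and read noise are assumed.
import numpy as np

rng = np.random.default_rng(1)

def crossbar_mvm(G, v, noise_std=0.01):
    """Apply input voltages v to the rows of a conductance matrix G; by
    Ohm's and Kirchhoff's laws the column currents are i = v @ G, so the
    multiply happens inside the memory array, not in a separate ALU."""
    i = v @ G
    return i + rng.normal(0.0, noise_std, i.shape)  # crude read-noise model

G = np.abs(rng.normal(0.5, 0.1, (784, 256)))  # conductances are nonnegative
v = rng.random(784)                           # input voltage vector
print(crossbar_mvm(G, v)[:5])
```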
Problem

Research questions and friction points this paper is trying to address.

Reducing SNN training complexity with Spiking Direct Feedback Alignment
Accelerating SNN training via RRAM-based in-memory computing architecture
Minimizing accuracy loss while cutting training time and energy use
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spiking Direct Feedback Alignment for SNN training
RRAM-based In-Memory Computing architecture
Three-level pipelined dataflow for parallel training (see the sketch after this list)
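To make the pipelining idea concrete, here is a toy scheduler showing how three stages can overlap across successive training samples. The stage names are assumptions (the paper's actual levels may differ); the point is the occupancy pattern.

```python
# Toy three-stage pipeline schedule. Stage names are assumed for
# illustration; PipeSDFA maps its three levels onto RRAM-based hardware.
STAGES = ["forward", "error_projection", "weight_update"]

def pipeline_schedule(num_samples):
    """For each clock round, report which sample occupies each stage."""
    rounds = []
    for t in range(num_samples + len(STAGES) - 1):
        occupancy = {stage: t - s for s, stage in enumerate(STAGES)
                     if 0 <= t - s < num_samples}
        rounds.append(occupancy)
    return rounds

for t, occ in enumerate(pipeline_schedule(5)):
    print(f"round {t}: " + ", ".join(f"{k}<-sample {v}" for k, v in occ.items()))
```

With three stages, five samples finish in 7 overlapped rounds instead of 15 sequential stage executions; this overlap is where a pipelined training dataflow gains its throughput.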
Haoxiong Ren
State Key Laboratory of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China
Yangu He
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
Kwunhang Wong
The University of Hong Kong
Rui Bao
Ocean University of China
Ning Lin
Princeton University
Zhongrui Wang
School of Microelectronics, Southern University of Science and Technology, Shenzhen, China
Dashan Shang
State Key Laboratory of Fabrication Technologies for Integrated Circuits, Institute of Microelectronics, Chinese Academy of Sciences, Beijing, China