🤖 AI Summary
Training spiking neural networks (SNNs) on edge devices remains challenging due to the high computational overhead of conventional spike-based backpropagation. To address this, the authors propose PipeSDFA, a hardware–software co-design framework. Methodologically, PipeSDFA integrates Spiking Direct Feedback Alignment (SDFA) with RRAM-based in-memory computing, featuring a three-level pipelined dataflow that eliminates reliance on sequential error backpropagation and enables highly parallelized, low-latency weight updates. Experimental evaluation across five benchmark datasets demonstrates that PipeSDFA incurs less than 2% accuracy degradation while achieving 1.1×–10.5× faster training and 1.37×–2.1× lower energy consumption compared to PipeLayer. These results advance the energy efficiency and real-time capability of brain-inspired SNN training at the edge.
📝 Abstract
Spiking Neural Networks (SNNs) are increasingly favored for deployment on resource-constrained edge devices due to their energy-efficient, event-driven processing. However, training SNNs remains challenging because of the computational intensity of traditional backpropagation algorithms adapted to spike-based systems. In this paper, we propose a novel software–hardware co-design that introduces a hardware-friendly training algorithm, Spiking Direct Feedback Alignment (SDFA), and implements it on a Resistive Random Access Memory (RRAM)-based In-Memory Computing (IMC) architecture, referred to as PipeSDFA, to accelerate SNN training. On the software side, SDFA reduces the computational complexity of SNN training by eliminating sequential error propagation. On the hardware side, a three-level pipelined dataflow built on the IMC architecture parallelizes the training process. Experimental results demonstrate that the PipeSDFA training accelerator incurs less than 2% accuracy loss on five datasets compared to baselines, while achieving 1.1×–10.5× and 1.37×–2.1× reductions in training time and energy consumption, respectively, compared to PipeLayer.
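To illustrate the core idea behind direct feedback alignment, which SDFA builds on, here is a minimal NumPy sketch of plain DFA on a toy two-layer regression network. This is not the paper's SDFA: spiking dynamics, surrogate gradients, and the RRAM/IMC mapping are all omitted, and all names and sizes here are illustrative assumptions. The key point it shows is that each hidden layer's error signal comes from a fixed random projection of the output error, so no sequential backward pass through the weight matrices is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: input -> hidden -> output (sizes are arbitrary).
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0, 0.1, (n_hid, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hid))
# Fixed random feedback matrix: projects the output error directly to the
# hidden layer, replacing the W2.T of standard backpropagation.
B1 = rng.normal(0, 0.1, (n_hid, n_out))

def dfa_step(x, y, lr=0.01):
    """One DFA training step; returns the squared-error loss before update."""
    global W1, W2
    # Forward pass (ReLU hidden layer).
    a1 = W1 @ x
    h1 = np.maximum(a1, 0.0)
    y_hat = W2 @ h1
    e = y_hat - y                      # output error
    # DFA: hidden error via fixed random feedback, no backward sweep.
    d1 = (B1 @ e) * (a1 > 0)
    # Local, layer-parallel weight updates.
    W2 -= lr * np.outer(e, h1)
    W1 -= lr * np.outer(d1, x)
    return 0.5 * float(e @ e)

x = rng.normal(size=n_in)
y = np.full(n_out, 0.5)
losses = [dfa_step(x, y) for _ in range(100)]
```

Because the feedback path is a fixed random matrix rather than the transposed forward weights, the updates for different layers can be computed as soon as the output error is available; this independence is what the paper's pipelined dataflow exploits in hardware.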