🤖 AI Summary
This work addresses the inefficiency of parallel training in spiking neural networks (SNNs) caused by the temporal dependencies introduced by conventional reset mechanisms. To overcome this limitation, the authors propose the Dynamic Decay Spiking Neuron, which reinterprets the reset operation to preserve biological plausibility and sequential inference capabilities while enabling highly efficient large-scale parallel training for the first time. The design is compatible with diverse architectures—including CNNs, Transformers, and State Space Models—and supports multiple spiking activation modes. Experiments demonstrate a 25.6× speedup in training on sequences of length 16k, and models trained on 2k-length sequences generalize stably to 30k-length inputs. The approach achieves state-of-the-art performance across five benchmark tasks while exhibiting lower energy consumption.
📝 Abstract
The bio-inspired integrate-fire-reset mechanism of spiking neurons constitutes the foundation for efficient processing in Spiking Neural Networks (SNNs). Recent progress in large models demands that spiking neurons support highly parallel computation to scale efficiently on modern GPUs. This work proposes a novel functional perspective that provides general guidance for designing parallel spiking neurons. We argue that the reset mechanism, which induces complex temporal dependencies and hinders parallel training, should be removed. However, any such modification should satisfy two principles: 1) preserving the functions of reset as a core biological mechanism; and 2) enabling parallel training without sacrificing the serial inference ability of spiking neurons, which underpins their efficiency at test time. To this end, we identify the functions of the reset and analyze how to reconcile parallel training with serial inference, upon which we propose a dynamic decay spiking neuron. We comprehensively evaluate our method in terms of: 1) Training efficiency and extrapolation capability. On 16k-length sequences, we achieve a 25.6x training speedup over the pioneering parallel spiking neuron, and our models trained on 2k-length sequences can stably perform inference on sequences as long as 30k. 2) Generality. We demonstrate the consistent effectiveness of the proposed method across five task categories (image classification, neuromorphic event processing, time-series forecasting, language modeling, and reinforcement learning), three network architectures (spiking CNN/Transformer/SSMs), and two spike activation modes (spike/integer activation). 3) Energy consumption. The spike firing rate of our neuron is lower than that of vanilla and existing parallel spiking neurons.
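To make the core argument concrete, the sketch below contrasts a vanilla leaky integrate-and-fire (LIF) neuron, whose hard reset couples consecutive time steps and forces serial computation, with a reset-free leaky integration whose membrane potential is a linear recurrence and can therefore be evaluated for all time steps at once. This is a minimal illustration of the parallelization principle only, not the authors' dynamic decay neuron; the function names, the fixed decay `beta`, and the dense decay-matrix formulation are illustrative assumptions.

```python
import numpy as np

def lif_serial(x, beta=0.9, v_th=1.0):
    """Vanilla LIF with hard reset. The reset makes v[t] depend
    nonlinearly on the spike at t-1, so time steps must be
    computed one after another (serial)."""
    v, spikes = 0.0, []
    for t in range(len(x)):
        v = beta * v + x[t]          # leaky integration
        s = float(v >= v_th)         # fire
        v = v * (1.0 - s)            # hard reset: the temporal dependency
        spikes.append(s)
    return np.array(spikes)

def reset_free_parallel(x, beta=0.9, v_th=1.0):
    """Reset-free leaky integration: v[t] = sum_{k<=t} beta^(t-k) x[k]
    is a linear recurrence, so all membrane potentials can be computed
    in one shot (here via a lower-triangular decay matrix)."""
    T = len(x)
    t = np.arange(T)
    D = np.tril(beta ** (t[:, None] - t[None, :]))  # D[t,k] = beta^(t-k), k <= t
    v = D @ x                                       # all time steps at once
    return (v >= v_th).astype(float)
```

With a constant input `x = [0.5, 0.5, 0.5, 0.5]`, the reset version spikes and then restarts integration, while the reset-free version keeps accumulating; the functional role of the reset (suppressing firing after a spike) is what a practical reset-free design must recover by other means, e.g. a dynamically adjusted decay.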