🤖 AI Summary
Spiking Neural Networks (SNNs) face fundamental bottlenecks: high training overhead, excessive inference latency, and poor scalability to complex tasks. To address these, we propose the first learning framework integrating parallel spike computation with ANN-to-SNN conversion. Our method establishes a rigorous mathematical mapping between per-timestep spike rates and cumulative spike counts, enabling training-free, activation-function-agnostic conversion. We theoretically prove that this conversion is lossless and order-preserving. Furthermore, we introduce an optimal time-shift distance estimator and a distribution-aware error calibration mechanism to minimize conversion-induced approximation errors. Evaluated across diverse architectures and vision/speech benchmarks, our approach achieves state-of-the-art accuracy under ultra-low latency—requiring only 1–4 timesteps—significantly outperforming STBP and mainstream conversion methods.
📝 Abstract
Spiking Neural Networks (SNNs), as brain-inspired and energy-efficient models, still face the pivotal challenge of finding a suitable and efficient learning framework. The predominant training methodologies, namely Spatial-Temporal Back-propagation (STBP) and ANN-SNN Conversion, suffer from substantial training overhead or pronounced inference latency, which impedes the scaling of SNNs to larger networks and more complex application domains. In this work, we propose a novel parallel conversion learning framework that establishes a mathematical mapping between each timestep of the parallel spiking neurons and the cumulative spike firing rate. We theoretically prove the lossless and order-preserving properties of the conversion process and identify the optimal shift distance for each timestep. Furthermore, by combining this framework with a distribution-aware error calibration technique, we achieve efficient conversion for more general activation functions and training-free settings. Extensive experiments confirm the significant performance advantages of our method across various conversion cases under ultra-low time latency. To the best of our knowledge, this is the first work that jointly utilizes parallel spike computation and ANN-SNN Conversion, providing a highly promising approach for supervised SNN training.
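The core idea of mapping ANN activations to cumulative spike counts over a few timesteps can be illustrated with a standard quantize-clip-floor approximation common in the ANN-SNN conversion literature. This is a minimal sketch, not the paper's actual method: the function name, the threshold `theta`, and the timestep count `T` are our own assumptions, and the paper's shift-distance and calibration terms are omitted.

```python
import numpy as np

def ann_to_snn_rate(a, theta=1.0, T=4):
    """Approximate the output of an integrate-and-fire neuron over T timesteps.

    Sketch of the generic quantize-clip-floor mapping (not the paper's exact
    formulation): an ANN activation `a` is discretized into a cumulative spike
    count in [0, T], then scaled back to an equivalent firing rate.
    """
    # cumulative spike count over T steps, bounded below by 0 and above by T
    counts = np.clip(np.floor(a * T / theta), 0, T)
    # equivalent SNN output: (spikes per step) * threshold
    return counts * theta / T

# Negative inputs emit no spikes; activations above theta saturate at T spikes.
acts = np.array([-0.2, 0.1, 0.3, 0.9, 1.5])
print(ann_to_snn_rate(acts, theta=1.0, T=4))  # → [0.   0.   0.25 0.75 1.  ]
```

Because the spike count is fully determined by the activation, all T timesteps can be evaluated in parallel rather than simulated sequentially, which is the latency advantage the abstract refers to.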