🤖 AI Summary
Spiking Neural Networks (SNNs) suffer from limited representational capacity due to constrained spiking neuron dynamics, leading to substantial performance gaps versus Artificial Neural Networks (ANNs) in image classification and object detection. To address this, we propose Integer Binary Leaky Integrate-and-Fire (IB-LIF) neurons, introducing the first integer-domain binary LIF mechanism coupled with a dynamic range alignment strategy. This synergy enables virtual temporal expansion and high-magnitude spike activation, effectively overcoming SNN representational bottlenecks. Crucially, our method preserves intrinsic spiking computation and ultra-low-power advantages. On ImageNet, the resulting SNN achieves 74.19% top-1 accuracy—surpassing prior state-of-the-art by 3.45%. On COCO, it attains 66.2% mAP@50 and 49.1% mAP@50:95, outperforming previous best results by 1.6% and 1.8%, respectively. Moreover, it delivers a 6.3× improvement in energy efficiency.
📝 Abstract
Spiking Neural Networks (SNNs) are noted for their brain-like computation and energy efficiency, but their performance lags behind Artificial Neural Networks (ANNs) in tasks like image classification and object detection due to their limited representational capacity. To address this, we propose a novel spiking neuron, Integer Binary-Range Alignment Leaky Integrate-and-Fire, which exponentially expands the information expression capacity of spiking neurons with only a slight energy increase. This is achieved through two components: Integer Binary Leaky Integrate-and-Fire and a range alignment strategy. The Integer Binary Leaky Integrate-and-Fire neuron allows integer-valued activation during training, while a binary conversion that expands virtual timesteps maintains spike-driven dynamics during inference. The range alignment strategy solves the spike activation limitation problem, in which neurons fail to activate high integer values. Experiments show our method outperforms previous SNNs, achieving 74.19% accuracy on ImageNet and 66.2% mAP@50 and 49.1% mAP@50:95 on COCO, surpassing the previous bests with the same architecture by +3.45%, +1.6%, and +1.8%, respectively. Notably, our SNNs match or exceed ANNs' performance with the same architecture, and energy efficiency is improved by 6.3$\times$.
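The core idea described above, integer-valued activation during training with a binary unrolling over virtual timesteps at inference, can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the leak factor, the soft-reset rule, and the activation cap `max_level` are all illustrative assumptions:

```python
# Hypothetical sketch of the integer-binary LIF idea from the abstract.
# Training: the neuron emits an integer spike count in [0, max_level].
# Inference: that integer k is unrolled into k binary spikes over
# max_level "virtual" timesteps, keeping computation spike-driven.

def ib_lif_train_step(membrane, input_current, threshold=1.0, leak=0.5, max_level=4):
    """Leak, integrate, then emit an integer activation clipped to [0, max_level]."""
    membrane = leak * membrane + input_current
    # Integer activation: number of threshold crossings, clipped to max_level.
    spikes = min(max_level, max(0, int(membrane // threshold)))
    membrane -= spikes * threshold  # soft reset by the emitted charge
    return membrane, spikes

def unroll_to_binary(spike_count, max_level=4):
    """Inference-time expansion: integer k -> k ones across max_level virtual timesteps."""
    return [1 if t < spike_count else 0 for t in range(max_level)]
```

For example, an input current of 2.7 against a unit threshold yields an integer activation of 2 in training, which at inference becomes the binary spike train `[1, 1, 0, 0]` over four virtual timesteps, so downstream layers still see only binary events.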