🤖 AI Summary
This work addresses the limited representational capacity of spiking neural networks (SNNs) caused by the inherent inconsistency in temporal spike dynamics. To overcome this, the authors propose a dual-consistency optimization framework: first, a hardware-friendly bitwise AND operation efficiently decouples a stable spike skeleton from multi-timestep spike trains, and unstable spikes are encouraged to converge toward this consistent structure, enhancing temporal coherence; second, amplitude-aware spike noise is injected to enrich representational diversity and improve generalization. Notably, this is the first approach to leverage bitwise AND operations for both spike skeleton extraction and noise injection. The method achieves significant performance gains under ultra-low-latency settings across diverse architectures and datasets, with accuracy improvements of up to 8.33%, helping to unlock the energy-efficiency and speed potential of SNNs.
📝 Abstract
Although the temporal spike dynamics of spiking neural networks (SNNs) enable low-power capture of temporal patterns, they also incur inherent inconsistencies that severely compromise representation quality. In this paper, we perform dual consistency optimization via Stable Spike to mitigate this problem, thereby improving the recognition performance of SNNs. Using the hardware-friendly bitwise "AND" operation, we efficiently decouple a stable spike skeleton from the multi-timestep spike maps, capturing critical semantics while reducing inconsistencies caused by variable noise spikes. Enforcing the unstable spike maps to converge to the stable spike skeleton significantly improves the inherent consistency across timesteps. Furthermore, we inject amplitude-aware spike noise into the stable spike skeleton to diversify the representations while preserving consistent semantics. The SNN is encouraged to produce perturbation-consistent predictions, which contributes to generalization. Extensive experiments across multiple architectures and datasets validate the effectiveness and versatility of our method. In particular, our method significantly advances neuromorphic object recognition under ultra-low latency, improving accuracy by up to 8.33%. This will help unlock the full energy-efficiency and speed potential of SNNs.
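The core mechanism described above, extracting a stable spike skeleton by AND-ing binary spike maps across timesteps and penalizing deviation from it, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names, tensor shapes, and the squared-error form of the consistency penalty are assumptions for exposition, and the amplitude-aware noise injection is omitted since its exact formulation is not given here.

```python
import numpy as np

def stable_spike_skeleton(spikes: np.ndarray) -> np.ndarray:
    """Bitwise-AND the binary spike maps over the timestep axis.

    spikes: uint8 array of shape (T, H, W) with values in {0, 1}.
    Returns the skeleton: 1 only where a spike fires at EVERY timestep,
    i.e. the temporally stable structure.
    """
    skeleton = spikes[0].copy()
    for t in range(1, spikes.shape[0]):
        skeleton &= spikes[t]  # hardware-friendly bitwise AND
    return skeleton

def consistency_loss(spikes: np.ndarray, skeleton: np.ndarray) -> float:
    # Hypothetical penalty: mean squared deviation of each timestep's
    # spike map from the stable skeleton (spikes >= skeleton elementwise,
    # so this counts unstable spikes absent from the skeleton).
    return float(((spikes - skeleton) ** 2).mean())

rng = np.random.default_rng(0)
spikes = (rng.random((4, 8, 8)) > 0.5).astype(np.uint8)  # T=4 binary maps
skel = stable_spike_skeleton(spikes)
loss = consistency_loss(spikes, skel)
```

For binary maps the AND reduction equals a per-pixel minimum over timesteps, so the skeleton never contains a spike that any single timestep lacks; driving `consistency_loss` down therefore pushes the unstable maps toward the shared stable structure.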