AI Summary
Existing spike encoding schemes suffer under low-timestep constraints: rate- and timing-based codes exhibit low information density, while high-expressivity alternatives often rely on complex neuron dynamics that compromise scalability. To address this, we propose Canonic Signed Spike (CSS) encoding, a novel scheme that jointly integrates signed (excitatory/inhibitory) spikes, nonlinear spike weighting, and the Ternary Self-Amplifying (TSA) neuron model, augmented with a first-order refractory period mechanism to break linear encoding bottlenecks. Coupled with an ANN-to-SNN direct conversion framework, CSS achieves up to 5× inference timestep compression on CIFAR-10 and ImageNet, with <0.5% accuracy degradation. This advancement significantly enhances both the temporal efficiency and practical deployability of spiking neural networks.
Abstract
Spiking Neural Networks (SNNs) seek to mimic the spiking behavior of biological neurons and are expected to play a key role in the advancement of neural computing and artificial intelligence. The conversion of Artificial Neural Networks (ANNs) to SNNs is the most widely used training method, which ensures that the resulting SNNs perform comparably to ANNs on large-scale datasets. The efficiency of these conversion-based SNNs is often determined by the neural coding scheme. Current schemes typically use spike count or timing for encoding; both quantities are linearly related to ANN activations, which increases the required number of time steps. To address this limitation, we propose a novel Canonic Signed Spike (CSS) coding scheme. This method incorporates non-linearity into the encoding process by weighting spikes at each step of neural computation, thereby increasing the information encoded in each spike. We identify the temporal coupling phenomenon arising from weighted spikes and introduce negative spikes along with a Ternary Self-Amplifying (TSA) neuron model to mitigate the issue. A one-step silent period is implemented during neural computation, achieving high accuracy with low latency. We apply the proposed methods to directly convert full-precision ANNs and evaluate performance on the CIFAR-10 and ImageNet datasets. Our experimental results demonstrate that the CSS coding scheme effectively compresses the time steps needed for coding and reduces inference latency with minimal conversion loss.
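To illustrate the core idea of weighted signed spikes, the sketch below encodes a real-valued activation as a short ternary spike train whose spikes carry power-of-two weights, so that T signed spikes can represent roughly 3^T levels instead of the T+1 levels of a rate code. The power-of-two weighting and the greedy residual rule are illustrative assumptions for this sketch, not the paper's exact CSS algorithm or TSA neuron dynamics.

```python
def css_encode(x, T=4):
    """Encode an activation x in [-1, 1) as T signed spikes {-1, 0, +1}
    with power-of-two weights (assumed weighting, for illustration)."""
    weights = [2.0 ** -(t + 1) for t in range(T)]  # 1/2, 1/4, 1/8, ...
    spikes, residual = [], x
    for w in weights:
        if residual >= w / 2:        # positive (excitatory) spike
            s = 1
        elif residual <= -w / 2:     # negative (inhibitory) spike
            s = -1
        else:                        # silent step
            s = 0
        spikes.append(s)
        residual -= s * w            # remove the encoded contribution
    return spikes, weights

def css_decode(spikes, weights):
    """Recover the activation as the weighted sum of signed spikes."""
    return sum(s * w for s, w in zip(spikes, weights))

spikes, weights = css_encode(0.7, T=4)
print(spikes, css_decode(spikes, weights))  # 4 signed spikes approximate 0.7
```

Note how a negative spike lets a later step correct an overshoot from an earlier, heavier spike, which is the role inhibitory spikes play in breaking the linear relationship between spike count and encoded value.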