Optimising Event-Driven Spiking Neural Network with Regularisation and Cutoff

📅 2023-01-23
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the low inference efficiency and high temporal overhead of Spiking Neural Networks (SNNs), this paper proposes a dynamic inference termination mechanism that adaptively halts event-driven inference while preserving accuracy. Our key contributions are: (1) the first Top-K dynamic termination strategy, which determines inference completion in real time based on neuronal activation strength; (2) termination-aware regularization, explicitly incorporating termination behavior into the training objective to ensure training–inference consistency; and (3) full compatibility with both ANN-to-SNN conversion and direct SNN training paradigms. Evaluated on CIFAR-10 (frame-based data), our method reduces average simulation timesteps by 2.26×; on CIFAR10-DVS (event-based data), it achieves a 1.79× reduction. Accuracy degradation remains below 0.3%, yielding substantial improvements in energy efficiency.
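The Top-K termination idea described above can be sketched as follows: accumulate the network's output evidence timestep by timestep and halt as soon as the leading class pulls far enough ahead of the runner-up. This is a minimal illustrative sketch, not the paper's exact formulation; the function name, the gap-based confidence test, and the `gap_threshold` parameter are assumptions for illustration.

```python
def topk_cutoff_inference(step_outputs, gap_threshold=2.0):
    """Event-driven inference with an early cutoff.

    step_outputs: per-timestep output activations (e.g. output-layer
    spike counts), one list of class scores per simulation timestep.
    Halts once the accumulated top-1 score leads the top-2 score by
    at least `gap_threshold` (hypothetical criterion for illustration).
    Returns (predicted_class, timesteps_used).
    """
    n_classes = len(step_outputs[0])
    accumulated = [0.0] * n_classes
    t = 0
    for t, out in enumerate(step_outputs, start=1):
        # Integrate this timestep's output evidence.
        for c in range(n_classes):
            accumulated[c] += out[c]
        # Compare the leader against the runner-up.
        ranked = sorted(accumulated, reverse=True)
        if ranked[0] - ranked[1] >= gap_threshold:
            break  # confident enough: terminate inference early
    pred = max(range(n_classes), key=lambda c: accumulated[c])
    return pred, t
```

With a clearly dominant class the loop stops well before the full simulation duration, which is the source of the timestep savings; ambiguous inputs fall through to the full duration, preserving accuracy.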
📝 Abstract
Spiking neural networks (SNNs), as the next generation of artificial neural networks (ANNs), offer a closer mimicry of natural neural networks and hold promise for significant improvements in computational efficiency. However, current SNNs are trained to infer over a fixed duration, overlooking the potential of dynamic inference in SNNs. In this paper, we strengthen the marriage between SNNs and event-driven processing with a proposal to consider a cutoff in SNNs, which can terminate an SNN at any time during inference to achieve efficient inference. Two novel optimisation techniques are presented to achieve inference-efficient SNNs: a Top-K cutoff and a regularisation. The proposed regularisation influences the training process, optimising the SNN for the cutoff, while the Top-K cutoff technique optimises the inference phase. We conduct an extensive set of experiments on multiple benchmark frame-based datasets, such as CIFAR-10/100 and Tiny-ImageNet, and event-based datasets, including CIFAR10-DVS, N-Caltech101 and DVS128 Gesture. The experimental results demonstrate the effectiveness of our techniques in both ANN-to-SNN conversion and direct training, enabling SNNs to require 1.76x to 2.76x fewer timesteps for CIFAR-10 and 1.64x to 1.95x fewer timesteps across all event-based datasets, with near-zero accuracy loss. These findings affirm the compatibility and potential benefits of our techniques in enhancing accuracy and reducing inference latency when integrated with existing methods. Code available: https://github.com/Dengyu-Wu/SNNCutoff
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Efficiency Optimization
Accuracy Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Top-K cutoff
Regularization technique
Spiking Neural Networks (SNN) efficiency enhancement