🤖 AI Summary
To address the suboptimal accuracy and energy efficiency that spiking neural networks (SNNs) suffer when they directly adopt artificial neural network (ANN) architectures, this paper proposes LightSNN, a lightweight neural architecture search (NAS) method tailored for edge devices. The approach makes three key contributions: (1) a training-free, pruning-based NAS mechanism that drastically reduces search overhead; (2) a sparsity-aware Hamming distance fitness metric that quantifies how spike activation patterns differ across data samples; and (3) a cell-based search space that includes backward connections suited to the recurrent temporal dynamics of SNNs. Evaluated on CIFAR-10, CIFAR-100, and DVS128-Gesture, the method achieves state-of-the-art accuracy on the static datasets, improves classification accuracy by 4.49% on DVS128-Gesture, and accelerates architecture search by 98x over SNASNet while running 30% faster than the best existing baseline on DVS128-Gesture.
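The training-free, pruning-based search can be pictured as a greedy loop that scores candidate sub-architectures without any weight updates. The sketch below is only an illustration of that idea: the names `prune_search`, `fitness_fn`, and `initial_cell`, as well as the one-operation-at-a-time greedy pruning, are assumptions and not the authors' exact procedure.

```python
def prune_search(initial_cell, fitness_fn, data_batch, min_ops=4):
    """Greedy, training-free pruning sketch: repeatedly drop the candidate
    operation whose removal hurts the fitness score the least. No weights
    are trained; fitness_fn only runs forward passes on data_batch."""
    cell = list(initial_cell)                     # candidate ops/edges in the cell
    while len(cell) > min_ops:
        best_score, best_idx = None, None
        for i in range(len(cell)):
            candidate = cell[:i] + cell[i + 1:]   # cell with operation i removed
            score = fitness_fn(candidate, data_batch)
            if best_score is None or score > best_score:
                best_score, best_idx = score, i
        cell.pop(best_idx)                        # commit the least-damaging prune
    return cell
```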
📝 Abstract
Spiking Neural Networks (SNNs) are highly regarded for their energy efficiency, inherent activation sparsity, and suitability for real-time processing on edge devices. However, most current SNN methods adopt architectures resembling traditional artificial neural networks (ANNs), which are suboptimal for SNNs: although SNNs excel in energy efficiency, they tend to reach lower accuracy than ANNs when built on these conventional architectures. In response, this work presents LightSNN, a rapid and efficient Neural Architecture Search (NAS) technique specifically tailored for SNNs that autonomously finds the most suitable architecture, striking a good balance between accuracy and efficiency by enforcing sparsity. Building on the spiking NAS network (SNASNet) framework, we use a cell-based search space that includes backward connections to construct our training-free, pruning-based NAS mechanism. Our technique assesses how spike activation patterns differ across data samples using a sparsity-aware Hamming distance fitness evaluation. Thorough experiments are conducted on both static (CIFAR10 and CIFAR100) and neuromorphic (DVS128-Gesture) datasets. LightSNN achieves state-of-the-art results on CIFAR10 and CIFAR100, improves accuracy on DVS128-Gesture by 4.49%, and significantly reduces search time, most notably offering a 98x speedup over SNASNet and running 30% faster than the best existing method on DVS128-Gesture.
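As a rough illustration of how a sparsity-aware Hamming distance score might be computed from the spike patterns of an untrained candidate network, the sketch below compares the binary spike maps of different samples in a mini-batch and weights the result by a sparsity term. The function name `hamming_fitness`, the tensor layout, and the particular sparsity weighting are assumptions made for illustration, not the paper's exact formula.

```python
import torch

def hamming_fitness(spike_maps: torch.Tensor) -> float:
    """Sketch of a sparsity-aware Hamming-distance fitness score.

    spike_maps: 0/1 spike patterns from an untrained candidate network,
    shaped (batch, features) after flattening time and spatial dims,
    with batch > 1. Architectures whose spike patterns differ more across
    input samples, while remaining sparse, receive a higher score.
    """
    s = spike_maps.float()
    b, d = s.shape
    ones = s @ s.t()                    # per-pair count of co-active positions
    zeros = (1 - s) @ (1 - s).t()       # per-pair count of co-silent positions
    hamming = d - ones - zeros          # per-pair count of differing positions
    pair_score = hamming.sum() / (b * (b - 1))   # mean over distinct sample pairs
    sparsity = 1.0 - s.mean()           # illustrative reward for low firing rates
    return (pair_score * sparsity).item()
```

In practice, such a score would be computed from spike activations recorded for a single random mini-batch (e.g., via forward hooks on the spiking neurons), so each candidate architecture can be evaluated in one forward pass without any training.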