🤖 AI Summary
To address the low training efficiency, high memory overhead, and reliance on backpropagation of spiking neural networks (SNNs) deployed on neuromorphic chips, this paper proposes a lightweight encoder-decoder framework grounded in sparse coding and the Locally Competitive Algorithm (LCA). The framework introduces the first scalable, instance-wise LCA decoder, which eliminates gradient-based backpropagation entirely, drastically reducing computational and memory requirements while enabling hardware-friendly training on datasets of arbitrary scale. By combining sparse representation learning with local competition dynamics, the method achieves top-1 accuracies of 80.75% on ImageNet and 79.32% on CIFAR-100, the highest publicly reported results for SNNs to date. This work marks a significant step toward efficient, scalable, and neuromorphically native SNN training.
📝 Abstract
Neuromorphic computing has recently gained significant attention as a promising approach for developing energy-efficient, massively parallel computing systems that are inspired by the spiking behavior of the human brain and natively map Spiking Neural Networks (SNNs). Effective training algorithms for SNNs are imperative for the wider adoption of neuromorphic platforms; however, SNN training continues to lag behind advances in other classes of artificial neural network (ANN). In this paper, we reduce this gap by proposing an innovative encoder-decoder technique that leverages sparse coding and the Locally Competitive Algorithm (LCA) to provide an algorithm specifically designed for neuromorphic platforms. Using our proposed Dataset-Scalable Exemplar LCA-Decoder, we reduce the computational demands and memory requirements associated with training SNNs via error backpropagation on increasingly large training sets, offering a solution that scales to datasets of any size. Our results show the highest reported top-1 test accuracy using SNNs on the ImageNet and CIFAR-100 datasets, surpassing previous benchmarks. Specifically, we achieved a record top-1 accuracy of 80.75% on ImageNet (ILSVRC2012 validation set) and 79.32% on CIFAR-100.
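For readers unfamiliar with the sparse-coding mechanism the abstract builds on, here is a minimal, illustrative NumPy sketch of the generic Locally Competitive Algorithm (soft-threshold variant). It is not the paper's Exemplar LCA-Decoder; the dictionary, threshold, and step sizes below are arbitrary illustrative choices. In LCA, each dictionary atom drives a "neuron" whose membrane potential is excited by its match to the input and inhibited by other active neurons, so the population settles into a sparse code.

```python
import numpy as np

def lca_sparse_code(x, Phi, lam=0.05, tau=10.0, n_steps=200):
    """Generic LCA sparse coding (soft-threshold variant), for illustration.

    x    : input vector, shape (d,)
    Phi  : dictionary with unit-norm columns, shape (d, k)
    lam  : sparsity threshold (lambda)
    Returns the sparse coefficient vector a, shape (k,).
    """
    b = Phi.T @ x                             # feed-forward drive per neuron
    G = Phi.T @ Phi - np.eye(Phi.shape[1])    # lateral inhibition (Gram minus identity)
    u = np.zeros_like(b)                      # membrane potentials
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                           # only above-threshold neurons fire
        u += (b - u - G @ a) / tau            # leaky dynamics with competition
    return soft(u)

# Usage: encode a signal built from 5 atoms of a random 128-atom dictionary
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 128))
Phi /= np.linalg.norm(Phi, axis=0)            # unit-norm atoms
x = Phi[:, :5] @ rng.normal(size=5)
a = lca_sparse_code(x, Phi)                   # sparse code; Phi @ a approximates x
```

The soft-threshold nonlinearity makes the fixed point of these dynamics an approximate LASSO solution, which is why LCA yields codes that are both sparse and reconstructive.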