🤖 AI Summary
To address the high energy consumption of spiking neural networks (SNNs) caused by structural redundancy, this paper proposes CH-SNN, a dynamic sparse training framework that achieves, for the first time, ultra-sparsification (>99% sparsity) of all linear layers without accuracy degradation. Methodologically, CH-SNN integrates spike-correlation-driven topology and weight initialization, a hybrid pruning score, and the CH3-L3 synaptic regeneration automaton, inspired by Cannistraci-Hebb learning, to enable synergistic "pruning-regeneration" optimization. Evaluated on six benchmark datasets (including CIFAR-10) and multiple SNN architectures, CH-SNN reduces parameter count by two orders of magnitude relative to dense baselines while preserving or even improving inference accuracy and substantially enhancing energy efficiency. This work establishes a scalable, structurally sparse paradigm for low-power neuromorphic computing.
📝 Abstract
Inspired by the brain's spike-based computation, spiking neural networks (SNNs) inherently possess temporal activation sparsity. However, for sparse training of SNNs in the structural connection domain, existing methods fail to reach ultra-sparse network structures without significant performance loss, hindering progress in energy-efficient neuromorphic computing. This limitation poses a critical challenge: how to achieve high structural connection sparsity while maintaining performance comparable to fully connected networks. To address this challenge, we propose the Cannistraci-Hebb Spiking Neural Network (CH-SNN), a novel and generalizable dynamic sparse training framework for SNNs consisting of four stages. First, we propose a sparse spike correlated topological initialization (SSCTI) method to initialize a sparse network based on node correlations. Second, temporal activation sparsity and structural connection sparsity are integrated via a proposed sparse spike weight initialization (SSWI) method. Third, a hybrid link removal score (LRS) is applied to prune redundant weights and inactive neurons, improving information flow. Finally, the CH3-L3 network automaton framework, inspired by Cannistraci-Hebb learning theory, performs link prediction for potential synaptic regrowth. Together, these mechanisms enable CH-SNN to sparsify all linear layers. We conduct extensive experiments on six datasets including CIFAR-10 and CIFAR-100, evaluating network architectures such as spiking convolutional neural networks and Spikformer.
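The prune-regrow cycle described above can be sketched as a single dynamic sparse training step on one linear layer. This is a minimal illustrative sketch, not the paper's method: plain weight magnitude stands in for the hybrid link removal score (LRS), and a raw count of length-3 paths in the layer's bipartite connectivity graph stands in for the CH3-L3 regrowth score; the function name `dst_step` and all parameters are hypothetical.

```python
import numpy as np

def dst_step(w, mask, prune_frac=0.3):
    """One prune-regrow step on a linear layer (illustrative sketch).

    w    : (n_in, n_out) weight matrix
    mask : (n_in, n_out) 0/1 connectivity mask, modified in place

    Magnitude pruning is a stand-in for CH-SNN's hybrid LRS, and the
    L3-path count is a stand-in for the CH3-L3 score.
    """
    w = w * mask                          # zero out inactive weights
    active = np.flatnonzero(mask)
    k = int(prune_frac * active.size)
    if k == 0:
        return w, mask
    # Prune: remove the k active links with the smallest |weight|.
    weakest = active[np.argsort(np.abs(w.flat[active]))[:k]]
    mask.flat[weakest] = 0
    w.flat[weakest] = 0.0
    # Regrow: score every missing link (i, j) by the number of length-3
    # paths i -> j' -> i' -> j in the bipartite connectivity graph.
    l3 = mask @ mask.T @ mask             # (n_in, n_out) path counts
    missing = np.flatnonzero(mask == 0)
    best = missing[np.argsort(l3.flat[missing])[-k:]]
    mask.flat[best] = 1
    w.flat[best] = 0.0                    # regrown links start at zero
    return w, mask
```

Because the step removes and adds the same number of links, the layer's overall sparsity level is preserved across iterations, which is the invariant that dynamic sparse training maintains throughout training.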