Cannistraci-Hebb Training on Ultra-Sparse Spiking Neural Networks

📅 2025-11-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high energy consumption of spiking neural networks (SNNs) caused by structural redundancy, this paper proposes CH-SNN, a dynamic sparse training framework that achieves, for the first time, full-range ultra-sparsification (>99% sparsity) of linear layers without accuracy degradation. Methodologically, CH-SNN integrates spike-correlation-driven topology and weight initialization, a hybrid pruning scoring mechanism, and the CH3-L3 synaptic regeneration automaton inspired by Cannistraci-Hebb learning theory, enabling synergistic “pruning–regeneration” optimization. Evaluated on six benchmark datasets (including CIFAR-10) and multiple SNN architectures, CH-SNN reduces parameter count by two orders of magnitude relative to dense baselines while preserving or even improving inference accuracy and significantly enhancing energy efficiency. This work establishes a scalable, structurally sparse paradigm for low-power neuromorphic computing.
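The “pruning–regeneration” loop described above can be sketched as one generic dynamic-sparse-training step over a binary connectivity mask. This is an illustrative assumption, not the paper's method: `prune_regrow_step`, the weight-magnitude pruning proxy, and the random stand-in regrowth scores below take the place of CH-SNN's hybrid link removal score and CH3-L3 predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_regrow_step(weights, mask, scores, prune_frac=0.1):
    """One hypothetical dynamic-sparse-training update: drop the
    lowest-magnitude active links, then regrow the same number of
    inactive links with the highest regrowth scores, so the total
    sparsity level stays constant across the update."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(~mask.astype(bool))
    n_swap = int(prune_frac * active.size)
    # Prune: the n_swap active links with smallest |w| (a proxy criterion).
    drop = active[np.argsort(np.abs(weights.ravel()[active]))[:n_swap]]
    # Regrow: the n_swap inactive links with the largest regrowth score.
    grow = inactive[np.argsort(scores.ravel()[inactive])[inactive.size - n_swap:]]
    new_mask = mask.copy().ravel()
    new_mask[drop] = 0
    new_mask[grow] = 1
    new_w = weights.copy().ravel()
    new_w[drop] = 0.0
    return new_w.reshape(weights.shape), new_mask.reshape(mask.shape)

W = rng.normal(size=(32, 32))
M = (rng.random((32, 32)) < 0.05).astype(np.int8)  # ~95% structural sparsity
S = rng.random((32, 32))                           # stand-in regrowth scores
W2, M2 = prune_regrow_step(W * M, W * 0 + M, S)
assert M2.sum() == M.sum()  # sparsity level is preserved
```

Because prune and regrow swap equal numbers of links, the mask density is invariant, which is what lets such methods hold a >99% sparsity budget throughout training.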

📝 Abstract
Inspired by the brain's spike-based computation, spiking neural networks (SNNs) inherently possess temporal activation sparsity. However, when it comes to the sparse training of SNNs in the structural connection domain, existing methods fail to achieve ultra-sparse network structures without significant performance loss, thereby hindering progress in energy-efficient neuromorphic computing. This limitation presents a critical challenge: how to achieve high levels of structural connection sparsity while maintaining performance comparable to fully connected networks. To address this challenge, we propose the Cannistraci-Hebb Spiking Neural Network (CH-SNN), a novel and generalizable dynamic sparse training framework for SNNs consisting of four stages. First, we propose a sparse spike correlated topological initialization (SSCTI) method to initialize a sparse network based on node correlations. Second, temporal activation sparsity and structural connection sparsity are integrated via a proposed sparse spike weight initialization (SSWI) method. Third, a hybrid link removal score (LRS) is applied to prune redundant weights and inactive neurons, improving information flow. Finally, the CH3-L3 network automaton framework inspired by Cannistraci-Hebb learning theory is incorporated to perform link prediction for potential synaptic regrowth. These mechanisms enable CH-SNN to achieve sparsification across all linear layers. We have conducted extensive experiments on six datasets including CIFAR-10 and CIFAR-100, evaluating various network architectures such as spiking convolutional neural networks and Spikformer.
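The regrowth stage above relies on topological link prediction over length-3 paths. As a hedged sketch of that idea, the function below computes a simplified degree-normalized L3 score in the spirit of the Cannistraci-Hebb L3 family; the paper's actual CH3-L3 automaton additionally uses local-community (internal/external) degrees, which are omitted here, and `l3_scores` is an assumed name, not the paper's API.

```python
import numpy as np

def l3_scores(A):
    """Simplified degree-normalized paths-of-length-3 link predictor.

    A candidate link (u, v) is scored by summing, over every length-3
    walk u-z1-z2-v in adjacency matrix A, the intermediate-node penalty
    1 / sqrt(d(z1) * d(z2)). Existing links and self-loops are zeroed
    so only absent links are ranked for regrowth."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)
    inv = np.divide(1.0, np.sqrt(d), out=np.zeros_like(d), where=d > 0)
    C = A * inv                # C[u, z] = A[u, z] / sqrt(d(z))
    S = (C @ A * inv) @ A      # sum of 1/sqrt(d_z1 * d_z2) over u-z1-z2-v
    np.fill_diagonal(S, 0.0)
    S[A > 0] = 0.0
    return S

# Path graph 0-1-2-3: the only length-3 walk joins 0 and 3 through
# nodes of degree 2, so score(0, 3) = 1 / sqrt(2 * 2) = 0.5.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    A[u, v] = A[v, u] = 1
S = l3_scores(A)
assert np.isclose(S[0, 3], 0.5)
```

Penalizing high-degree intermediates reflects the local-community intuition behind Cannistraci-Hebb learning: links supported by low-degree shared neighborhoods are preferred for regrowth over hub-mediated ones.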
Problem

Research questions and friction points this paper is trying to address.

Achieving ultra-sparse SNN structures without performance degradation
Maintaining performance comparable to fully connected networks
Enabling energy-efficient neuromorphic computing through sparsification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Initializes sparse networks using node correlation analysis
Integrates temporal and structural sparsity via weight initialization
Uses hybrid pruning and Cannistraci-Hebb synaptic regrowth
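The first bullet, correlation-driven topology initialization, can be illustrated as follows. This is a hypothetical sketch of the general idea only: `correlation_topology_init` and its thresholding rule are assumptions, since the paper's SSCTI procedure is not reproduced here.

```python
import numpy as np

def correlation_topology_init(pre_spikes, post_spikes, density=0.01):
    """Hypothetical correlation-driven sparse topology initialization:
    keep only the pre/post links whose spike trains are most co-active.

    pre_spikes:  (T, n_pre)  binary spike raster over T timesteps
    post_spikes: (T, n_post) binary spike raster over T timesteps
    Returns an (n_pre, n_post) binary mask at roughly `density`."""
    pre = pre_spikes - pre_spikes.mean(axis=0)
    post = post_spikes - post_spikes.mean(axis=0)
    corr = np.abs(pre.T @ post)          # (n_pre, n_post) co-activity
    k = max(1, int(density * corr.size))
    thresh = np.partition(corr.ravel(), -k)[-k]
    return (corr >= thresh).astype(np.int8)

rng = np.random.default_rng(1)
pre = (rng.random((200, 50)) < 0.2).astype(float)
post = (rng.random((200, 30)) < 0.2).astype(float)
M = correlation_topology_init(pre, post, density=0.02)
assert M.shape == (50, 30)
```

Seeding the mask from spike correlations, rather than uniformly at random, gives the dynamic sparse training loop a topology that already respects node activity statistics.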
Yuan Hua
School of Integrated Circuits, Tsinghua University, Beijing, China
Jilin Zhang
School of Integrated Circuits, Tsinghua University, Beijing, China
Yingtao Zhang
Professor of Computer Science, Harbin Institute of Technology
Wenqi Gu
Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI), Department of Computer Science, Tsinghua University, Beijing, China
Leyi You
School of Integrated Circuits, Tsinghua University, Beijing, China
Baobo Xiong
School of Integrated Circuits, Tsinghua University, Beijing, China
C. Cannistraci
Center for Complex Network Intelligence (CCNI), Tsinghua Laboratory of Brain and Intelligence (THBI), Department of Computer Science, Department of Biomedical Engineering, Tsinghua University, Beijing, China
Hong Chen
School of Integrated Circuits, Tsinghua University, Beijing, China