Spiking Neural Networks with Random Network Architecture

📅 2025-05-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training spiking neural networks (SNNs) remains challenging due to low efficiency, architectural fragmentation, and reliance on gradient approximations or custom learning rules. Method: This paper proposes Random Spiking Neural Networks (RanSNN), the first SNN framework integrating random network principles—employing sparse random connectivity initialization and partial weight freezing—to drastically reduce trainable parameters while remaining fully compatible with standard backpropagation and conventional artificial neural network (ANN) training methodologies. Contribution/Results: RanSNN preserves biological plausibility of SNNs while significantly enhancing training efficiency and engineering practicality. Experiments across multiple benchmarks demonstrate substantially accelerated training convergence, accuracy comparable to fully trained SNNs, superior generalization and robustness, and improved hardware deployability—without sacrificing computational fidelity or requiring surrogate gradients.

📝 Abstract
The spiking neural network (SNN), known as the third generation of neural networks, is an important network paradigm. Because its mode of information propagation follows biological principles, the SNN offers high energy efficiency and has advantages in complex, energy-intensive application scenarios. However, unlike the artificial neural network (ANN), which has a mature and unified framework, SNN models and training methods have not yet been widely unified, owing to the discontinuous, non-differentiable firing mechanism. Although several algorithms for training SNNs have since been proposed, some fundamental issues remain unsolved. Inspired by random network design, this work proposes a new architecture for spiking neural networks, RanSNN, in which only part of the network weights need training and all classic training methods can be adopted. Compared with traditional training methods for spiking neural networks, it greatly improves training efficiency while preserving performance, and benchmark tests validate its versatility and stability.
Problem

Research questions and friction points this paper is trying to address.

SNNs lack unified models due to non-differentiable firing mechanisms
Existing SNN training methods have unresolved fundamental issues
Need for an SNN architecture that improves training efficiency and versatility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random network architecture for Spiking Neural Networks
Partial weight training with classic methods
Improved efficiency and maintained performance
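The core idea above — a fixed, sparsely initialized random projection feeding a spiking layer, with only a small set of weights left trainable — can be sketched as follows. This is a minimal illustration under assumed details, not the paper's actual implementation: the layer sizes, sparsity level, LIF neuron parameters, and the placeholder gradient are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
sparsity = 0.7  # assumed fraction of connections present in the random part

# Frozen random sparse weights: initialized once, never updated
mask = rng.random((n_in, n_out)) < sparsity
W_frozen = rng.normal(0.0, 1.0, (n_in, n_out)) * mask

# Trainable weights (e.g. a readout layer) -- the only part that learns
W_train = rng.normal(0.0, 0.1, (n_out, 2))

def lif_step(x, v, threshold=1.0, decay=0.9):
    """One leaky integrate-and-fire step: integrate input, emit spikes, reset."""
    v = decay * v + x
    spikes = (v >= threshold).astype(float)
    v = v * (1.0 - spikes)  # reset membrane potential where a spike fired
    return spikes, v

# Forward pass: fixed random projection -> spiking layer -> trainable readout
x = rng.random(n_in)
v = np.zeros(n_out)
spikes, v = lif_step(x @ W_frozen, v)
out = spikes @ W_train

# A training step touches only W_train; W_frozen stays fixed,
# so the number of trainable parameters is drastically reduced.
grad = np.outer(spikes, out)  # placeholder gradient, for illustration only
W_train -= 0.01 * grad
```

Freezing the random projection sidesteps backpropagating through the non-differentiable firing mechanism for those weights, which is consistent with the claim that classic ANN training methods can be applied to the trainable part.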
Zihan Dai
School of Mathematical Sciences, Soochow University, Suzhou, Jiangsu Province, PR China, 215006
Huanfei Ma
Soochow University
Nonlinear Science · Systems Biology · Applied Math