🤖 AI Summary
Training spiking neural networks (SNNs) remains challenging due to low efficiency, architectural fragmentation, and reliance on gradient approximations or custom learning rules. Method: This paper proposes Random Spiking Neural Networks (RanSNN), the first SNN framework to integrate random network principles. By combining sparse random connectivity initialization with partial weight freezing, RanSNN drastically reduces the number of trainable parameters while remaining fully compatible with standard backpropagation and conventional artificial neural network (ANN) training methodologies. Contribution/Results: RanSNN preserves the biological plausibility of SNNs while significantly improving training efficiency and engineering practicality. Experiments on multiple benchmarks demonstrate substantially faster training convergence, accuracy comparable to fully trained SNNs, superior generalization and robustness, and improved hardware deployability, all without sacrificing computational fidelity or requiring surrogate gradients.
📝 Abstract
The spiking neural network (SNN), known as the third-generation neural network, is an important network paradigm. Because its mode of information propagation follows biological principles, the SNN is highly energy efficient and offers advantages in complex, energy-constrained application scenarios. However, unlike the artificial neural network (ANN), which has mature and unified frameworks, SNN models and training methods have not yet been widely unified, owing to the discontinuous and non-differentiable nature of the firing mechanism. Although several training algorithms for SNNs have since been proposed, some fundamental issues remain unsolved. Inspired by random network design, this work proposes a new architecture for spiking neural networks, RanSNN, in which only part of the network weights needs training and all classic training methods can be adopted. Compared with traditional training methods for spiking neural networks, RanSNN greatly improves training efficiency while preserving performance, and benchmark tests validate its versatility and stability.
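The core mechanism described above, sparse random connectivity plus partial weight freezing so that only a subset of weights is updated by gradient descent, can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration of the general idea, not the authors' implementation: the layer sizes, sparsity levels, and the `sgd_step` helper are all invented for demonstration, and spiking dynamics are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (assumed names and hyperparameters, not the RanSNN API):
# a weight matrix is given sparse random connectivity at initialization,
# and most nonzero weights are frozen; gradients update only the rest.
n_in, n_out = 64, 32
sparsity = 0.9          # fraction of connections zeroed out
trainable_frac = 0.2    # fraction of surviving connections left trainable

# Sparse random initialization.
W = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
conn_mask = rng.random((n_in, n_out)) > sparsity
W *= conn_mask

# Freeze most surviving weights: only train_mask entries ever change.
train_mask = conn_mask & (rng.random((n_in, n_out)) < trainable_frac)

def sgd_step(W, grad, lr=0.1):
    """Apply one SGD step, masked so frozen weights stay fixed."""
    return W - lr * grad * train_mask

# A dummy gradient: frozen entries of W are untouched by the update.
grad = rng.normal(size=(n_in, n_out))
W_new = sgd_step(W, grad)
assert np.allclose(W_new[~train_mask], W[~train_mask])
```

Because the frozen entries never receive gradient, any standard optimizer can be used unchanged on the trainable subset, which is the compatibility property the abstract emphasizes.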