A Self-Ensemble Inspired Approach for Effective Training of Binary-Weight Spiking Neural Networks

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Binary-weight spiking neural networks (BWSNNs) are difficult to train due to the non-differentiability of spiking dynamics compounded by aggressive weight quantization. Method: This paper establishes, for the first time, a theoretical connection between spiking neural networks (SNNs) and binary neural networks (BNNs) through an analysis of backpropagation, introducing a novel perspective: *a feedforward SNN is equivalent to a self-ensemble of binary-activated networks with injected noise*. Leveraging this insight, the authors propose SEI-BWSNN, a unified framework integrating multi-shortcut architectures, surrogate gradient functions, knowledge distillation, and 1-bit weight optimization, and extend 1-bit training to the FFN layers of Transformers. Contribution/Results: On ImageNet, SEI-BWSNN achieves 82.52% top-1 accuracy with only 2 time steps, a new state of the art for BWSNNs, improving both accuracy and energy efficiency over prior binary-weight SNNs. The framework provides a scalable, hardware-friendly training paradigm for ultra-low-power neuromorphic computing.

📝 Abstract
Spiking Neural Networks (SNNs) are a promising approach to low-power applications on neuromorphic hardware due to their energy efficiency. However, training SNNs is challenging because of the non-differentiable spike generation function. To address this issue, the commonly used approach is to adopt the backpropagation through time framework, while assigning the gradient of the non-differentiable function with some surrogates. Similarly, Binary Neural Networks (BNNs) also face the non-differentiability problem and rely on approximating gradients. However, the deep relationship between these two fields and how their training techniques can benefit each other has not been systematically researched. Furthermore, training binary-weight SNNs is even more difficult. In this work, we present a novel perspective on the dynamics of SNNs and their close connection to BNNs through an analysis of the backpropagation process. We demonstrate that training a feedforward SNN can be viewed as training a self-ensemble of a binary-activation neural network with noise injection. Drawing from this new understanding of SNN dynamics, we introduce the Self-Ensemble Inspired training method for (Binary-Weight) SNNs (SEI-BWSNN), which achieves high-performance results with low latency even for the case of the 1-bit weights. Specifically, we leverage a structure of multiple shortcuts and a knowledge distillation-based training technique to improve the training of (binary-weight) SNNs. Notably, by binarizing FFN layers in a Transformer architecture, our approach achieves 82.52% accuracy on ImageNet with only 2 time steps, indicating the effectiveness of our methodology and the potential of binary-weight SNNs.
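The surrogate-gradient idea the abstract describes can be sketched in a few lines: the forward pass uses the true non-differentiable spike function (a Heaviside step on the membrane potential), while the backward pass substitutes a smooth approximation of its derivative. This is an illustrative NumPy sketch, not code from the paper; the sigmoid-shaped surrogate and its sharpness parameter `alpha` are common choices in the SNN literature, not necessarily the authors' exact formulation.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable spike generation.
    Emits a spike (1.0) wherever the membrane potential reaches threshold."""
    return (v >= threshold).astype(np.float64)

def spike_surrogate_grad(v, threshold=1.0, alpha=4.0):
    """Backward pass: surrogate gradient. The Heaviside step's true derivative
    is zero almost everywhere, so we substitute the derivative of a sigmoid
    centered at the threshold; alpha controls its sharpness."""
    s = 1.0 / (1.0 + np.exp(-alpha * (v - threshold)))
    return alpha * s * (1.0 - s)

v = np.array([0.2, 0.9, 1.0, 1.5])   # membrane potentials at one time step
spikes = spike_forward(v)            # binary spike outputs
grads = spike_surrogate_grad(v)      # largest near the threshold
```

In backpropagation through time, `grads` would replace the true (zero) derivative of the spike function at each time step, which is what lets gradient descent train the network despite the discontinuity.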
Problem

Research questions and friction points this paper is trying to address.

Training Spiking Neural Networks with non-differentiable spike functions
Exploring connections between SNNs and Binary Neural Networks
Improving binary-weight SNN performance with low latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-Ensemble Inspired training for binary-weight SNNs
Multiple shortcuts and knowledge distillation techniques
Binarizing FFN layers in Transformers efficiently
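The 1-bit weight optimization listed above typically combines a sign-based quantizer with a straight-through estimator (STE) for the backward pass. The sketch below is not from the paper: it shows the common XNOR-Net-style scheme (sign of each weight scaled by the mean absolute value, with a clipped STE), which is one standard way to realize binary-weight training.

```python
import numpy as np

def binarize_weights(w):
    """1-bit quantization: sign of each weight, scaled by the mean absolute
    value so the binary tensor preserves the layer's overall magnitude."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w), alpha

def ste_grad(grad_out, w, clip=1.0):
    """Straight-through estimator: the sign() has zero derivative, so the
    gradient is passed through unchanged, but zeroed where |w| exceeds the
    clipping range to keep latent weights from drifting."""
    return grad_out * (np.abs(w) <= clip)

w = np.array([[0.3, -0.7], [1.2, -0.1]])  # latent full-precision weights
w_bin, alpha = binarize_weights(w)        # used in the forward pass
g = ste_grad(np.ones_like(w), w)          # gradient flows to latent weights
```

During training, the latent full-precision weights `w` receive the STE gradients, while the forward pass uses only `w_bin`; at deployment only the 1-bit weights and the scalar `alpha` are kept, which is what makes the approach hardware-friendly.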