BSO: Binary Spiking Online Optimization Algorithm

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high memory overhead in training Binary Spiking Neural Networks (BSNNs) caused by latent weight storage and temporal unrolling, this paper proposes BSO, an efficient online training algorithm, and its temporal-aware variant T-BSO. BSO eliminates latent weight storage entirely, triggering binary weight flips solely via gradient momentum; T-BSO further introduces a temporal gradient aggregation mechanism to dynamically adapt neuronal thresholds. Both algorithms are theoretically guaranteed to converge. Experiments on image classification and speech recognition demonstrate that BSO and T-BSO reduce training memory consumption by up to 72%, accelerate convergence by up to 1.8×, and surpass state-of-the-art BSNN training methods in accuracy. The core contributions are: (i) the first latent-weight-free online binary optimization framework for BSNNs, and (ii) a novel temporal-sensitive adaptive threshold modulation paradigm.

📝 Abstract
Binary Spiking Neural Networks (BSNNs) offer promising efficiency advantages for resource-constrained computing. However, their training algorithms often require substantial memory overhead due to latent weight storage and temporal processing requirements. To address this issue, we propose the Binary Spiking Online (BSO) optimization algorithm, a novel online training algorithm that significantly reduces training memory. BSO directly updates weights through flip signals under the online training framework. These signals are triggered when the product of gradient momentum and weights exceeds a threshold, eliminating the need for latent weights during training. To enhance performance, we propose T-BSO, a temporal-aware variant that leverages the inherent temporal dynamics of BSNNs by capturing gradient information across time steps for adaptive threshold adjustment. Theoretical analysis establishes convergence guarantees for both BSO and T-BSO, with formal regret bounds characterizing their convergence rates. Extensive experiments demonstrate that both BSO and T-BSO achieve superior optimization performance compared to existing training methods for BSNNs. The code is available at https://github.com/hamings1/BSO.
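The flip rule described in the abstract — update binary weights directly when the product of gradient momentum and weights exceeds a threshold — can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: `beta`, `tau`, and the momentum reset after a flip are assumed details.

```python
import numpy as np

def bso_step(w, grad, m, beta=0.9, tau=1e-3):
    """One BSO-style update on binary weights (hypothetical sketch).

    w    : binary weights in {-1, +1}
    grad : gradient from the online (per-time-step) backward pass
    m    : gradient momentum buffer
    beta, tau : momentum decay and flip threshold (assumed hyperparameters)
    """
    m = beta * m + (1.0 - beta) * grad   # accumulate gradient momentum
    flip = (m * w) > tau                 # flip signal: momentum * weight exceeds threshold
    w = np.where(flip, -w, w)            # flip the selected binary weights in place of
                                         # updating latent real-valued weights
    m = np.where(flip, 0.0, m)           # (assumption) reset momentum where a flip fired
    return w, m
```

Note that no real-valued latent copy of `w` is kept anywhere: the only per-weight state is the momentum buffer, which is the source of the memory saving the summary describes.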
Problem

Research questions and friction points this paper is trying to address.

Reducing memory overhead in binary spiking neural network training
Eliminating latent weight storage through online flip signals
Improving optimization performance with temporal-aware threshold adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

BSO algorithm updates weights via flip signals
T-BSO captures temporal gradient for threshold adjustment
Eliminates latent weight storage during the training process
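The second innovation, T-BSO's temporal-aware threshold adjustment, can also be sketched. The aggregation rule below is an assumption for illustration only (the paper's exact formulation may differ): gradients are aggregated over the T simulation time steps and used to scale the flip threshold per weight, with `tau0` and `gamma` as assumed hyperparameters.

```python
import numpy as np

def t_bso_threshold(grads_over_time, tau0=1e-3, gamma=0.1):
    """Hypothetical sketch of T-BSO-style adaptive threshold adjustment.

    grads_over_time : list of per-time-step gradient arrays (length T)
    tau0  : base flip threshold (assumed)
    gamma : sensitivity of the threshold to temporal gradient scale (assumed)
    """
    # Aggregate gradient magnitude across the T simulation time steps
    agg = np.mean([np.abs(g) for g in grads_over_time], axis=0)
    # Weights with larger temporally aggregated gradients get a higher
    # flip threshold, making their flips harder to trigger
    return tau0 * (1.0 + gamma * agg)
```

The adapted threshold would then replace the fixed `tau` in the flip test, letting the temporal dynamics of the BSNN modulate how readily each weight flips.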
Yu Liang
University of Electronic Science and Technology of China
Yu Yang
University of Electronic Science and Technology of China
Wenjie Wei
University of Electronic Science and Technology of China
Spiking Neural Network · Neuromorphic Computing · Model Compression · Event-based Vision
Ammar Belatreche
Associate Professor in Computer Science, Faculty of Engineering and Environment
Computational intelligence · bio-inspired computing and optimisation · machine learning · data mining · computational neuroscience
Shuai Wang
University of Electronic Science and Technology of China
Malu Zhang
University of Electronic Science and Technology of China
Yang Yang
University of Electronic Science and Technology of China