DS-ATGO: Dual-Stage Synergistic Learning via Forward Adaptive Threshold and Backward Gradient Optimization for Spiking Neural Networks

📅 2025-11-17
🤖 AI Summary
Direct training of Spiking Neural Networks (SNNs) suffers from membrane potential distribution drift across timesteps, causing threshold misalignment, imbalanced spiking activity, and severe gradient attenuation, particularly in deep layers. Method: We propose a two-stage cooperative learning framework: (1) a forward pass employing an adaptive firing threshold dynamically calibrated to the evolving membrane potential distribution, enabling spatiotemporally aligned spike generation; and (2) a backward pass with dynamic gradient optimization, wherein surrogate gradients are spatiotemporally scaled during backpropagation to mitigate deep-layer gradient vanishing. Contribution/Results: Our method introduces no additional trainable parameters, significantly enhancing training stability and convergence speed. It achieves state-of-the-art (SOTA) accuracy on multiple benchmark datasets. Moreover, it balances spike rates across timesteps and improves gradient coverage in deep layers by 32.7%, effectively reconciling accuracy, computational efficiency, and scalability.
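The forward-stage idea above — a firing threshold that tracks the drifting membrane-potential distribution — can be sketched as follows. The mean-plus-scaled-std calibration rule, the `alpha` coefficient, and the hard reset are illustrative assumptions for this sketch, not the paper's exact formulation:

```python
import numpy as np

def adaptive_threshold_step(v, base_theta=1.0, alpha=0.5):
    """One forward timestep with a distribution-aware threshold.

    The threshold is offset by the current membrane-potential
    statistics (mean + alpha * std), so the spike rate stays roughly
    balanced as the distribution drifts across timesteps instead of
    drifting to both sides of a fixed threshold.
    """
    theta = base_theta + np.mean(v) + alpha * np.std(v)
    spikes = (v >= theta).astype(np.float32)
    v_reset = v * (1.0 - spikes)  # hard reset for neurons that fired
    return spikes, v_reset, theta
```

Because the threshold is recomputed from the same batch of potentials it gates, the fraction of firing neurons is tied to the distribution's upper tail rather than to any fixed voltage level.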

📝 Abstract
Brain-inspired spiking neural networks (SNNs) are recognized as a promising avenue for achieving efficient, low-energy neuromorphic computing. Direct training of SNNs typically relies on surrogate gradient (SG) learning to estimate derivatives of non-differentiable spiking activity. However, during training, the distribution of neuronal membrane potentials varies across timesteps and progressively deviates toward both sides of the firing threshold. When the firing threshold and SG remain fixed, this may lead to imbalanced spike firing and diminished gradient signals, preventing SNNs from performing well. To address these issues, we propose a novel dual-stage synergistic learning algorithm that achieves forward adaptive thresholding and backward dynamic SG. In forward propagation, we adaptively adjust thresholds based on the distribution of membrane potential dynamics (MPD) at each timestep, which enriches neuronal diversity and effectively balances firing rates across timesteps and layers. In backward propagation, drawing from the underlying association between MPD, threshold, and SG, we dynamically optimize SG to enhance gradient estimation through spatio-temporal alignment, effectively mitigating gradient information loss. Experimental results demonstrate that our method achieves significant performance improvements. Moreover, it allows neurons to fire stable proportions of spikes at each timestep and increases the proportion of neurons that obtain gradients in deeper layers.
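The abstract's backward stage exploits the link between the membrane-potential distribution, the threshold, and the surrogate gradient. A minimal sketch, assuming a triangular surrogate kernel centered on the adaptive threshold and a depth-dependent widening factor (both are assumptions; the paper derives its scaling from the MPD):

```python
import numpy as np

def dynamic_surrogate_grad(v, theta, width=1.0, depth_scale=1.0):
    """Triangular surrogate gradient centered on the (adaptive)
    threshold, with support widened by `depth_scale` so deeper layers
    keep nonzero gradients as potentials drift away from theta.

    The kernel has base 2*w and height 1/w, so its integral stays 1:
    widening trades peak magnitude for gradient coverage.
    """
    w = width * depth_scale
    return np.maximum(0.0, 1.0 - np.abs(v - theta) / w) / w
```

With a fixed narrow kernel, neurons whose potentials sit far from the threshold receive exactly zero gradient; widening the support in deep layers is one way to recover those neurons' gradient signal.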
Problem

Research questions and friction points this paper is trying to address.

SNNs suffer from imbalanced spike firing during training
Fixed firing thresholds cause diminished gradient signals in SNNs
Neuronal membrane potential distribution deviates across timesteps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Forward adaptive thresholding balances firing rates
Backward dynamic surrogate gradient enhances estimation
Dual-stage synergistic learning aligns spatio-temporal gradients
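The three points above come together in a single training step: the forward pass calibrates the threshold to the current potentials, and the backward pass centers a widened surrogate kernel on that same threshold. A minimal numpy sketch, with all constants and the calibration rule assumed for illustration:

```python
import numpy as np

def dual_stage_step(v, grad_out, base_theta=1.0, alpha=0.5, depth_scale=1.0):
    """One synergistic forward/backward step (illustrative sketch).

    Forward stage: threshold tracks the membrane-potential
    distribution. Backward stage: a triangular surrogate gradient is
    centered on that same adaptive threshold and widened by
    `depth_scale` so deeper layers retain gradient coverage.
    """
    # Forward: distribution-calibrated threshold and spike emission.
    theta = base_theta + np.mean(v) + alpha * np.std(v)
    spikes = (v >= theta).astype(np.float32)
    # Backward: surrogate gradient aligned with the adaptive threshold.
    w = depth_scale
    sg = np.maximum(0.0, 1.0 - np.abs(v - theta) / w) / w
    grad_v = grad_out * sg  # chain rule through the surrogate
    return spikes, grad_v
```

The synergy is in sharing `theta` between the two stages: the surrogate kernel always sits where spikes are actually decided, so forward firing and backward credit assignment stay spatio-temporally aligned.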
Jiaqiang Jiang
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023; Zhejiang Key Laboratory of Visual Information Intelligent Processing, Hangzhou 310023
Wenfeng Xu
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023; Zhejiang Key Laboratory of Visual Information Intelligent Processing, Hangzhou 310023
Jing Fan
Research Assistant, Vanderbilt University
Rui Yan
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023; Zhejiang Key Laboratory of Visual Information Intelligent Processing, Hangzhou 310023