Optimization of Low-Latency Spiking Neural Networks Utilizing Historical Dynamics of Refractory Periods

📅 2025-06-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional refractory mechanisms in low-latency spiking neural networks (SNNs) fail under short simulation time steps, leading to neuronal over-activation and poor noise robustness. Method: We propose a history-aware dynamic refractory period model that estimates the initial refractory duration from membrane potential derivatives and historical refractory states, and introduces a threshold-dependent refractory kernel function for adaptive regulation, while preserving the binary spiking nature of SNNs. Contribution/Results: The method effectively suppresses redundant spikes and enhances state-update stability. Experiments demonstrate state-of-the-art (SOTA) accuracy on both static and neuromorphic datasets, and the model significantly outperforms conventional SNNs and artificial neural networks (ANNs) in noise robustness.

📝 Abstract
The refractory period controls the neuron spike firing rate and is crucial for network stability and noise resistance. With advancements in spiking neural network (SNN) training methods, low-latency SNN applications have expanded. In low-latency SNNs, shorter simulation steps render traditional refractory mechanisms, which rely on empirical distributions or spike firing rates, less effective. However, omitting the refractory period amplifies the risk of neuron over-activation and reduces the system's robustness to noise. To address this challenge, we propose a historical dynamic refractory period (HDRP) model that leverages membrane potential derivatives together with historical refractory periods to estimate an initial refractory period and dynamically adjust its duration. Additionally, we propose a threshold-dependent refractory kernel to mitigate excessive neuron state accumulation. Our approach retains the binary characteristics of SNNs while enhancing both noise resistance and overall performance. Experimental results show that HDRP-SNN significantly reduces redundant spikes compared to traditional SNNs, and achieves state-of-the-art (SOTA) accuracy on both static and neuromorphic datasets. Moreover, HDRP-SNN outperforms artificial neural networks (ANNs) and traditional SNNs in noise resistance, highlighting the crucial role of the HDRP mechanism in enhancing the performance of low-latency SNNs.
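To make the abstract's mechanism concrete, the sketch below shows how a leaky integrate-and-fire update might estimate an initial refractory duration from the membrane potential derivative and a running history of past refractory periods, while keeping spikes binary. The paper's exact equations are not reproduced here; the update rule and the parameters `tau`, `alpha`, `beta`, and `r_max` are illustrative assumptions only.

```python
import numpy as np

def hdrp_lif_step(v, v_prev, refr, refr_hist, x, *,
                  tau=2.0, v_th=1.0, alpha=0.5, beta=0.5, r_max=4):
    """One step of a LIF neuron with a history-aware dynamic refractory
    period. Illustrative sketch only, not the paper's exact HDRP model."""
    dv = v - v_prev                       # membrane potential derivative (finite difference)
    active = refr <= 0                    # neurons outside their refractory window
    v_new = np.where(active, v + (x - v) / tau, v)
    spike = (v_new >= v_th) & active      # binary spike decision
    # initial refractory duration from |dv| and the historical refractory state
    r_init = np.clip(alpha * np.abs(dv) + beta * refr_hist, 1.0, r_max)
    refr_new = np.where(spike, np.round(r_init), np.maximum(refr - 1, 0))
    # update the running refractory history only for neurons that fired
    refr_hist_new = np.where(spike, 0.5 * (refr_hist + r_init), refr_hist)
    v_new = np.where(spike, 0.0, v_new)   # hard reset keeps spiking binary
    return v_new, spike.astype(float), refr_new, refr_hist_new
```

With a constant suprathreshold input, the refractory counter forces the neuron to skip at least one step after each spike, which is the redundant-spike suppression the abstract describes.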
Problem

Research questions and friction points this paper is trying to address.

Optimizing refractory periods in low-latency spiking neural networks
Reducing neuron over-activation and enhancing noise resistance
Improving accuracy and performance in SNNs with dynamic refractory adjustment
Innovation

Methods, ideas, or system contributions that make the work stand out.

HDRP model uses membrane potential derivatives
Dynamic refractory period adjustment enhances stability
Threshold-dependent kernel reduces neuron state accumulation
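The threshold-dependent kernel in the last bullet can be pictured as a gating function applied to incoming current during the refractory window, so that membrane state cannot accumulate right after a spike. The exponential form below, whose recovery rate scales with the firing threshold `v_th`, is a hypothetical stand-in for illustration; the paper defines its own kernel.

```python
import numpy as np

def threshold_refractory_kernel(t_since_spike, v_th=1.0, tau_r=2.0):
    """Hypothetical threshold-dependent refractory kernel: a gating
    factor in [0, 1) on incoming current. It is 0 right after a spike,
    rises back toward 1 over time, and recovers faster for a higher
    firing threshold v_th. Illustrative assumption, not the paper's kernel."""
    return 1.0 - np.exp(-v_th * np.asarray(t_since_spike, dtype=float) / tau_r)
```

A neuron model would then integrate `threshold_refractory_kernel(t) * x` instead of the raw input `x`, damping state accumulation immediately after each spike.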
👥 Authors

Liying Tao (Institute of Microelectronics of the Chinese Academy of Sciences; University of Chinese Academy of Sciences)
Zonglin Yang (Ph.D. in Computer Science, Nanyang Technological University)
Delong Shang (Institute of Microelectronics of the Chinese Academy of Sciences; Nanjing Institute of Intelligent Technology)