Discretized Quadratic Integrate-and-Fire Neuron Model for Deep Spiking Neural Networks

📅 2025-10-04
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Leaky Integrate-and-Fire (LIF) neurons suffer from limited representational capacity due to linear membrane potential decay, while Quadratic Integrate-and-Fire (QIF) neurons—though biologically more expressive—exhibit training instability in deep spiking neural networks (SNNs). Method: We propose the first discretized QIF neuron model tailored for high-performance deep SNNs. By rigorously deriving an analytical formula for a parameterized surrogate gradient window, we substantially mitigate gradient mismatch, ensuring training stability and scalability. Our approach integrates biologically inspired nonlinear dynamics, timestep-parameterized analysis, and backpropagation-based training. Contribution/Results: Evaluated on CIFAR-10/100, ImageNet, and CIFAR-10 DVS, our method consistently outperforms LIF baselines in both accuracy and dynamic representation capability. It establishes, for the first time, the feasibility and effectiveness of stable, trainable QIF neurons in large-scale SNNs.

📝 Abstract
Spiking Neural Networks (SNNs) have emerged as energy-efficient alternatives to traditional artificial neural networks, leveraging asynchronous and biologically inspired neuron dynamics. Among existing neuron models, the Leaky Integrate-and-Fire (LIF) neuron has become widely adopted in deep SNNs due to its simplicity and computational efficiency. However, this efficiency comes at the expense of expressiveness, as LIF dynamics are constrained to linear decay at each timestep. In contrast, more complex models, such as the Quadratic Integrate-and-Fire (QIF) neuron, exhibit richer, nonlinear dynamics but have seen limited adoption due to their training instability. On that note, we propose the first discretization of the QIF neuron model tailored for high-performance deep spiking neural networks and provide an in-depth analysis of its dynamics. To ensure training stability, we derive an analytical formulation for surrogate gradient windows directly from our discretization's parameter set, minimizing gradient mismatch. We evaluate our method on CIFAR-10, CIFAR-100, ImageNet, and CIFAR-10 DVS, demonstrating its ability to outperform state-of-the-art LIF-based methods. These results establish our discretization of the QIF neuron as a compelling alternative to LIF neurons for deep SNNs, combining richer dynamics with practical scalability.
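The linear-vs-nonlinear contrast the abstract draws can be written out in continuous time. These are the generic textbook forms of the two models, not the paper's specific discretization or parameter choices:

```latex
% LIF: membrane potential decays linearly toward rest
\tau \frac{dV}{dt} = -\left(V - V_{\text{rest}}\right) + R\, I(t)

% QIF: quadratic drive, with an unstable fixed point at the
% critical voltage V_c separating decay from runaway firing
\tau \frac{dV}{dt} = a \left(V - V_{\text{rest}}\right)\left(V - V_c\right) + R\, I(t)
```

The quadratic term is what gives QIF neurons their richer dynamics: below $V_c$ the potential relaxes toward rest, while above $V_c$ it diverges and triggers a spike, unlike the uniform linear decay of the LIF model.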
Problem

Research questions and friction points this paper is trying to address.

Overcoming LIF neurons' limited expressiveness in deep SNNs
Addressing training instability in complex neuron models like QIF
Enabling high-performance deep SNNs with richer nonlinear dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Discretized Quadratic Integrate-and-Fire neuron model
Analytical surrogate gradient windows for stability
Outperforms LIF-based methods on multiple datasets
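To make the two contributions above concrete, here is a minimal sketch of one Euler-discretized QIF timestep and a rectangular surrogate-gradient window. All constants (`a`, `v_crit`, the window half-width `w`, etc.) are illustrative placeholders; the paper derives its window analytically from the discretization parameters, which are not reproduced here:

```python
def qif_step(v, i_in, dt=1.0, a=1.0, v_rest=0.0, v_crit=0.5,
             v_th=1.0, v_reset=0.0):
    """One Euler-discretized QIF update (illustrative constants, not the
    paper's derived parameters). The quadratic drive replaces LIF's
    linear leak; crossing the threshold emits a spike and hard-resets."""
    v = v + dt * (a * (v - v_rest) * (v - v_crit) + i_in)
    spike = 1.0 if v >= v_th else 0.0
    v = v_reset if spike else v
    return v, spike

def surrogate_window(v, v_th=1.0, w=0.5):
    """Rectangular surrogate gradient dS/dV used during backprop:
    nonzero only within +/- w of the threshold. The paper chooses this
    window analytically; here `w` is just a hand-picked half-width."""
    return 1.0 / (2 * w) if abs(v - v_th) <= w else 0.0
```

Because the spike nonlinearity has zero gradient almost everywhere, training replaces its derivative with `surrogate_window` during the backward pass; matching the window to the discretization is what the paper credits for reducing gradient mismatch.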
Eric Jahns
STAM Center, Arizona State University, Tempe, Arizona
Davi Moreno
Center for Advanced Studies and Systems of Recife
Milan Stojkov
Faculty of Technical Sciences, University of Novi Sad, Novi Sad, Serbia
Michel A. Kinsy
Associate Professor, Arizona State University
Microelectronics Security, Hardware Security, Secure Computer Architecture, Adaptive Computing, Cryptosystems