Training Deep Normalization-Free Spiking Neural Networks with Lateral Inhibition

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the biological implausibility introduced by explicit normalization (e.g., batch normalization) in training deep spiking neural networks (SNNs), this work proposes an end-to-end training framework that eliminates all explicit normalization layers. Methodologically, it introduces an excitatory-inhibitory (E-I) neuron circuit that complies with Dale's law and incorporates biologically inspired lateral inhibition, where subtractive and divisive inhibition jointly regulate neuronal activity and gain. It further proposes dynamic E-I initialization (E-I Init) and decoupled backpropagation (E-I Prop) to ensure stable gradient propagation. Evaluated across multiple datasets and architectures, the method performs on par with normalized baselines while preserving strict biological plausibility, demonstrating that biological realism and competitive accuracy can coexist in deep SNNs.

📝 Abstract
Spiking neural networks (SNNs) have garnered significant attention as a central paradigm in neuromorphic computing, owing to their energy efficiency and biological plausibility. However, training deep SNNs has critically depended on explicit normalization schemes, such as batch normalization, leading to a trade-off between performance and biological realism. To resolve this conflict, we propose a normalization-free learning framework that incorporates lateral inhibition inspired by cortical circuits. Our framework replaces the traditional feedforward SNN layer with a circuit of distinct excitatory (E) and inhibitory (I) neurons that complies with Dale's law. The circuit dynamically regulates neuronal activity through subtractive and divisive inhibition, which respectively control the activity and the gain of excitatory neurons. To enable and stabilize end-to-end training of the biologically constrained SNN, we propose two key techniques: E-I Init and E-I Prop. E-I Init is a dynamic parameter initialization scheme that balances excitatory and inhibitory inputs while performing gain control. E-I Prop decouples the backpropagation of the E-I circuits from the forward propagation and regulates gradient flow. Experiments across several datasets and network architectures demonstrate that our framework enables stable training of deep SNNs with biological realism and achieves competitive performance without resorting to explicit normalization. Therefore, our work not only provides a solution to training deep SNNs but also serves as a computational platform for further exploring the functions of lateral inhibition in large-scale cortical computation.
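The abstract describes subtractive inhibition controlling activity and divisive inhibition controlling gain. The paper's exact update equations are not reproduced here, but the idea can be sketched for a single timestep of a leaky integrate-and-fire (LIF) excitatory neuron; the parameter names (`alpha`, `beta`, `gamma`) and the specific drive formula are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def ei_lif_step(v, x_exc, x_inh, alpha=0.9, beta=1.0, gamma=0.5, v_th=1.0):
    """One timestep of an LIF excitatory neuron under lateral inhibition.

    v      : membrane potentials of the excitatory population
    x_exc  : excitatory input current
    x_inh  : inhibitory input current from the I population
    beta   : strength of subtractive inhibition (shifts the drive)
    gamma  : strength of divisive inhibition (scales the gain)
    """
    # Subtractive inhibition shifts the input drive down;
    # divisive inhibition rescales it, implementing gain control.
    drive = (x_exc - beta * x_inh) / (1.0 + gamma * x_inh)
    v = alpha * v + drive                 # leaky integration
    spikes = (v >= v_th).astype(v.dtype)  # threshold crossing emits a spike
    v = v - v_th * spikes                 # soft reset after spiking
    return v, spikes
```

With strong excitation and some inhibition, the first neuron below still spikes because the net drive crosses threshold, while the weakly driven second neuron only integrates.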
Problem

Research questions and friction points this paper is trying to address.

Training deep SNNs without explicit normalization schemes
Incorporating lateral inhibition for biological realism in SNNs
Enabling stable end-to-end training of biologically constrained SNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lateral inhibition replaces normalization in SNN training
E-I Init balances excitation and inhibition dynamically
E-I Prop decouples backpropagation for stable gradients
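The E-I Init idea of balancing excitatory and inhibitory inputs at initialization can be illustrated with a minimal sketch: draw non-negative weights for both pathways (keeping them sign-separated per Dale's law), then rescale the inhibitory weights so that the average inhibitory current matches the average excitatory current on a sample batch. The function name and the balancing rule are assumptions for illustration; the paper's actual scheme also performs gain control and is not detailed in this summary.

```python
import numpy as np

def ei_init(n_in, n_out, x_sample, rng=None):
    """Sketch of a dynamic E-I initialization (illustrative, not the paper's).

    Draws non-negative excitatory and inhibitory weights, then rescales
    the inhibitory weights so mean inhibitory drive matches mean
    excitatory drive on a sample batch of inputs.
    """
    rng = np.random.default_rng(rng)
    # Non-negative weights keep the E and I pathways sign-separated (Dale's law).
    w_exc = rng.uniform(0.0, 1.0, size=(n_in, n_out)) / np.sqrt(n_in)
    w_inh = rng.uniform(0.0, 1.0, size=(n_in, n_out)) / np.sqrt(n_in)
    i_exc = x_sample @ w_exc
    i_inh = x_sample @ w_inh
    # Rescale inhibition so its average current balances excitation.
    w_inh *= i_exc.mean() / (i_inh.mean() + 1e-8)
    return w_exc, w_inh
```

Because the rescaling is linear, the balance condition holds exactly on the sample batch used for initialization.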