Learning Neuron Dynamics within Deep Spiking Neural Networks

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing deep spiking neural networks (SNNs) predominantly employ simplified neuron models—such as the leaky integrate-and-fire (LIF) model—that inadequately capture complex temporal dynamics; meanwhile, expressive alternatives suffer from manual hyperparameter tuning and poor scalability. To address this, we propose the Learnable Neuron Model (LNM), which parameterizes the nonlinear integrate-and-fire dynamics via low-order polynomials, enabling end-to-end, data-driven learning of neuron behavior without hand-crafted hyperparameters. LNM is natively compatible with backpropagation and mainstream deep learning frameworks. Equipped with LNM, deep SNNs achieve stable and efficient training. Extensive experiments on static-image benchmarks (CIFAR-10/100, ImageNet) and a neuromorphic event-based dataset (CIFAR-10 DVS) demonstrate state-of-the-art performance, significantly enhancing SNNs' capacity to model temporal structure and improving generalization.

📝 Abstract
Spiking Neural Networks (SNNs) offer a promising energy-efficient alternative to Artificial Neural Networks (ANNs) by utilizing sparse and asynchronous processing through discrete spike-based computation. However, the performance of deep SNNs remains limited by their reliance on simple neuron models, such as the Leaky Integrate-and-Fire (LIF) model, which cannot capture rich temporal dynamics. While more expressive neuron models exist, they require careful manual tuning of hyperparameters and are difficult to scale effectively. This difficulty is evident in the lack of successful implementations of complex neuron models in high-performance deep SNNs. In this work, we address this limitation by introducing Learnable Neuron Models (LNMs). LNMs are a general, parametric formulation for non-linear integrate-and-fire dynamics that learn neuron dynamics during training. By learning neuron dynamics directly from data, LNMs enhance the performance of deep SNNs. We instantiate LNMs using low-degree polynomial parameterizations, enabling efficient and stable training. We demonstrate state-of-the-art performance in a variety of datasets, including CIFAR-10, CIFAR-100, ImageNet, and CIFAR-10 DVS. LNMs offer a promising path toward more scalable and high-performing spiking architectures.
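To make the core idea concrete, the sketch below shows what a learnable polynomial integrate-and-fire update could look like: the membrane drift term is a low-degree polynomial in the membrane potential whose coefficients would be trained jointly with the network weights. This parameterization (coefficient layout, hard reset, threshold of 1.0) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def lnm_step(u, i_in, coeffs, threshold=1.0):
    """One discrete-time step of a polynomial integrate-and-fire neuron.

    u       : membrane potentials (array)
    i_in    : input currents for this step (array)
    coeffs  : polynomial coefficients (c0, c1, c2, ...) that a
              Learnable Neuron Model would learn from data
              (hypothetical parameterization for illustration)
    """
    # Polynomial drift term: f(u) = c0 + c1*u + c2*u^2 + ...
    drift = sum(c * u**k for k, c in enumerate(coeffs))
    u_next = u + drift + i_in
    # Emit a spike where the potential crosses the threshold
    spike = (u_next >= threshold).astype(u_next.dtype)
    # Hard reset of spiking neurons (one common SNN convention)
    u_next = u_next * (1.0 - spike)
    return u_next, spike
```

Note that choosing `coeffs = (0.0, -0.5)` recovers a plain leaky update, `u_next = 0.5 * u + i_in`, so the classic LIF neuron sits inside this family as a special case; higher-degree coefficients add the nonlinear dynamics the paper argues LIF lacks.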
Problem

Research questions and friction points this paper is trying to address.

Deep SNNs underperform due to simplistic neuron models
Complex neuron models require difficult manual hyperparameter tuning
Lack of scalable complex neuron implementations in deep SNNs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learnable Neuron Models learn dynamics from data
Parametric formulation enables stable training process
Polynomial parameterizations achieve state-of-the-art performance
Eric Jahns
STAM Center, Arizona State University, Tempe, Arizona
Davi Moreno
Center for Advanced Studies and Systems of Recife
Michel A. Kinsy
Associate Professor, Arizona State University
Microelectronics Security · Hardware Security · Secure Computer Architecture · Adaptive Computing · Cryptosystems