CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks

📅 2024-02-07
🏛️ International Conference on Machine Learning
📈 Citations: 5
Influential: 1
📄 PDF
🤖 AI Summary
Spiking neural networks (SNNs) suffer from training difficulties due to the non-differentiability of leaky integrate-and-fire (LIF) neurons, resulting in gradient vanishing across time steps and inferior accuracy compared to artificial neural networks (ANNs). To address this, we propose the complementary LIF (CLIF) neuron—a novel spiking unit featuring a hyperparameter-free dual-path integration-and-firing mechanism. CLIF preserves strictly binary spike outputs while introducing an auxiliary differentiable pathway to mitigate temporal gradient decay. The design is theoretically sound and plug-and-play compatible with existing SNN architectures. Extensive experiments using surrogate gradient learning demonstrate that CLIF consistently outperforms state-of-the-art spiking neuron models across multiple benchmark datasets. Notably, under identical network topologies and training protocols, CLIF-based SNNs achieve accuracy on par with—or slightly exceeding—that of their ANN counterparts. The implementation is publicly available.
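The "dual-path integration-and-firing" mechanism can be made concrete with a short sketch. Below is a minimal PyTorch reading of the update rules: a leaky membrane potential u drives strictly binary spikes, while a complementary potential m accumulates spike history and modulates the reset, giving backpropagation a second temporal path. The decay parameterization (gamma = 1 - 1/tau), the sigmoid coupling, and the straight-through surrogate are assumptions here; defer to the authors' repository for the exact definitions.

```python
import torch
import torch.nn as nn

def heaviside_ste(v: torch.Tensor) -> torch.Tensor:
    # Binary Heaviside spike in the forward pass; the gradient of a sigmoid
    # is substituted in the backward pass (straight-through estimator).
    soft = torch.sigmoid(4.0 * v)
    return (v >= 0.0).float() + soft - soft.detach()

class CLIF(nn.Module):
    """Sketch of a complementary LIF neuron: membrane u emits binary spikes,
    complementary potential m tracks spike history and modulates the reset."""

    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.gamma = 1.0 - 1.0 / tau  # membrane decay (assumed parameterization)
        self.v_th = v_th              # firing threshold

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: time-major input currents, shape [T, batch, ...]
        u = torch.zeros_like(x_seq[0])
        m = torch.zeros_like(x_seq[0])
        spikes = []
        for x in x_seq:
            u = self.gamma * u + x                             # leaky integration
            s = heaviside_ste(u - self.v_th)                   # strictly binary spike
            m = m * torch.sigmoid((1.0 - self.gamma) * u) + s  # complementary path
            u = u - s * (self.v_th + torch.sigmoid(m))         # spike-triggered reset
            spikes.append(s)
        return torch.stack(spikes)                             # shape [T, batch, ...]
```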

📝 Abstract
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Compared to conventional deep Artificial Neural Networks (ANNs), SNNs exhibit superior efficiency and capability to process temporal information. However, it remains a challenge to train SNNs due to their non-differentiable spiking mechanism. The surrogate gradient method is commonly used to train SNNs, but often comes with an accuracy disadvantage relative to ANN counterparts. Through an analytical and experimental study of the training process of Leaky Integrate-and-Fire (LIF) neuron-based SNNs, we link the degraded accuracy to the vanishing of gradients along the temporal dimension. Moreover, we propose the Complementary Leaky Integrate-and-Fire (CLIF) neuron. CLIF creates extra paths to facilitate backpropagation of the temporal gradient while keeping the output binary. CLIF is hyperparameter-free and features broad applicability. Extensive experiments on a variety of datasets demonstrate CLIF's clear performance advantage over other neuron models. Furthermore, CLIF's performance even slightly surpasses that of ANNs with identical network structure and training conditions. The code is available at https://github.com/HuuYuLong/Complementary-LIF.
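Since the abstract leans on the surrogate gradient method, here is that trick in isolation: the forward pass stays a hard Heaviside step (so spikes remain binary), while the backward pass substitutes a smooth pseudo-derivative. This is a generic textbook version, not the paper's specific surrogate; the triangular shape and width are arbitrary choices for illustration.

```python
import torch

class TriangleSurrogate(torch.autograd.Function):
    """Heaviside step forward; triangular pseudo-derivative backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()  # hard, non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Replace d(spike)/dv with max(0, 1 - |v|) around the threshold.
        return grad_output * torch.clamp(1.0 - v.abs(), min=0.0)

spike = TriangleSurrogate.apply

v = torch.tensor([-0.5, 0.2, 1.5], requires_grad=True)
spike(v).sum().backward()
print(v.grad)  # tensor([0.5000, 0.8000, 0.0000]) -- nonzero only near threshold
```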
Problem

Research questions and friction points this paper is trying to address.

SNNs remain hard to train because the spiking mechanism is non-differentiable
Surrogate-gradient training suffers from vanishing gradients along the temporal dimension (see the sketch after this list)
LIF-based SNNs consequently trail comparable ANNs in accuracy
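To see why the temporal gradient decays, note that for a LIF membrane u[t] = gamma * u[t-1] + x[t] with gamma = 1 - 1/tau, the membrane-to-membrane Jacobian contributes a factor of gamma at every step, so the gradient reaching a point k steps in the past shrinks like gamma^k. A toy illustration of that dominant path (this ignores the reset and spike terms, which in practice shrink the gradient further):

```python
# Membrane-to-membrane gradient path of a LIF neuron over k time steps:
# |du[T] / du[T-k]| ~ gamma**k, with gamma = 1 - 1/tau.
tau = 2.0
gamma = 1.0 - 1.0 / tau
for k in (1, 4, 8, 16, 31):
    print(f"k={k:2d}: gradient scale ~ {gamma ** k:.2e}")
# k= 1: gradient scale ~ 5.00e-01
# ...
# k=31: gradient scale ~ 4.66e-10
```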
Innovation

Methods, ideas, or system contributions that make the work stand out.

The CLIF neuron creates complementary paths that strengthen temporal gradient backpropagation
CLIF keeps strictly binary spike outputs and introduces no extra hyperparameters
Under identical architectures and training conditions, CLIF-based SNNs match or slightly surpass their ANN counterparts (a drop-in usage sketch follows this list)
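Because the neuron is plug-and-play, adopting it in an existing pipeline is a one-line swap of the activation module. A hypothetical sketch, reusing the CLIF class from the earlier snippet and assuming time-major [T, N, C, H, W] activations as is common in SNN codebases:

```python
import torch.nn as nn

class SpikingBlock(nn.Module):
    """Conv -> BN -> spiking neuron, with the neuron model injectable."""

    def __init__(self, c_in: int, c_out: int, neuron: nn.Module):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.neuron = neuron  # e.g. CLIF(tau=2.0) instead of a plain LIF module

    def forward(self, x_seq):                        # x_seq: [T, N, C, H, W]
        t, n = x_seq.shape[:2]
        y = self.bn(self.conv(x_seq.flatten(0, 1)))  # fold time into batch for conv/BN
        return self.neuron(y.unflatten(0, (t, n)))   # neuron unrolls over time
```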
Yulong Huang
Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xiaopeng Lin
Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Hongwei Ren
Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Yue Zhou
Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Zunchang Liu
Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Haotian Fu
Brown University
Biao Pan
School of Integrated Circuit Science and Engineering, Beihang University, Beijing, China
Bojun Cheng
Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China