Dendritic Resonate-and-Fire Neuron for Effective and Efficient Long Sequence Modeling

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited memory capacity of Resonate-and-Fire (RF) neurons and the trade-off between energy efficiency and training speed in long-sequence modeling, this paper proposes the Dendritic RF neuron model. It introduces multiple dendritic branches for band-selective encoding and incorporates a soma threshold mechanism adaptively modulated by historical spiking activity, enabling sparse spike communication and oscillation-driven frequency-selective representation. By integrating multi-channel frequency decomposition, intrinsic oscillatory dynamics, and adaptive firing policies, the model preserves spiking neural network (SNN) training efficiency while substantially suppressing redundant spikes. Experiments demonstrate state-of-the-art accuracy on long-sequence tasks, significant improvements in spike sparsity, reduced computational overhead, and strong potential for deployment on edge devices.

📝 Abstract
The explosive growth in sequence length has intensified the demand for effective and efficient long sequence modeling. Benefiting from intrinsic oscillatory membrane dynamics, Resonate-and-Fire (RF) neurons can efficiently extract frequency components from input signals and encode them into spatiotemporal spike trains, making them well-suited for long sequence modeling. However, RF neurons exhibit limited effective memory capacity and a trade-off between energy efficiency and training speed on complex temporal tasks. Inspired by the dendritic structure of biological neurons, we propose a Dendritic Resonate-and-Fire (D-RF) model, which explicitly incorporates a multi-dendritic and soma architecture. Each dendritic branch encodes a specific frequency band by utilizing the intrinsic oscillatory dynamics of RF neurons, so that the branches collectively achieve a comprehensive frequency representation. Furthermore, we introduce an adaptive threshold mechanism into the soma structure that adjusts the threshold based on historical spiking activity, reducing redundant spikes while maintaining training efficiency on long sequence tasks. Extensive experiments demonstrate that our method maintains competitive accuracy while ensuring substantially sparse spikes, without compromising computational efficiency during training. These results underscore its potential as an effective and efficient solution for long sequence modeling on edge platforms.
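Based on the abstract's description, the D-RF dynamics can be sketched as follows. This is a minimal illustration only: the discrete-time update, the soma aggregation, the post-spike reset rule, and all parameter values here are assumptions, not the paper's exact formulation.

```python
import numpy as np

def d_rf_neuron(x, omegas, b=-0.1, dt=1.0, theta0=1.0, beta=0.5, tau_a=0.95):
    """Sketch of a Dendritic Resonate-and-Fire (D-RF) neuron.

    x      : 1-D input sequence of length T
    omegas : angular frequency per dendritic branch (band-selective encoding)
    b      : damping of the oscillatory membrane (assumed value)
    theta0 : baseline soma threshold; beta, tau_a : adaptation strength/decay
    """
    omegas = np.asarray(omegas, dtype=float)
    # Each branch is a complex-valued RF unit: z' = (b + i*omega) z + input
    decay = np.exp((b + 1j * omegas) * dt)   # per-branch oscillatory decay factor
    z = np.zeros(len(omegas), dtype=complex) # dendritic branch states
    a = 0.0                                  # trace of historical spiking activity
    spikes = np.zeros(len(x))
    for t in range(len(x)):
        z = decay * z + x[t]                 # each branch resonates at its own band
        soma = np.sum(z.imag)                # branches aggregate at the soma (assumption)
        theta = theta0 + beta * a            # threshold rises with recent activity
        if soma >= theta:
            spikes[t] = 1.0
            z = z.real + 0j                  # reset oscillatory phase after a spike (assumption)
        a = tau_a * a + spikes[t]            # update spike-history trace
    return spikes
```

Driving the neuron with a sinusoid near one branch's resonant frequency makes that branch's state grow and emit spikes, while the rising adaptive threshold suppresses runs of redundant spikes, which is the sparsity mechanism the abstract describes.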
Problem

Research questions and friction points this paper is trying to address.

Enhancing memory capacity and efficiency in long sequence modeling
Overcoming energy efficiency versus training speed trade-offs
Reducing redundant spikes while maintaining computational efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dendritic architecture for comprehensive frequency representation
Adaptive threshold mechanism to reduce redundant spikes
Combines energy efficiency with competitive training accuracy
👥 Authors
Dehao Zhang - University of Electronic Science and Technology of China (Spiking Neural Network)
Malu Zhang - University of Electronic Science and Technology of China
Shuai Wang - University of Electronic Science and Technology of China
Jingya Wang - Assistant Professor, ShanghaiTech University (Computer Vision, Embodied AI, Human-Object Interaction)
Wenjie Wei - University of Electronic Science and Technology of China (Spiking Neural Network, Neuromorphic Computing, Model Compression, Event-based Vision)
Zeyu Ma - University of Electronic Science and Technology of China
Guoqing Wang - University of Electronic Science and Technology of China
Yang Yang - University of Electronic Science and Technology of China
Haizhou Li - The Chinese University of Hong Kong (Shenzhen)