🤖 AI Summary
How can finite temporal memory be modeled efficiently in spiking neural networks (SNNs)? This work proposes a general state-space modeling paradigm that explicitly encodes input history via learnable auxiliary state variables, enabling neurons to dynamically access temporally localized information within a finite time window without increasing event-driven computational overhead. The framework is complementary to standard neuron models (e.g., leaky integrate-and-fire) and supports end-to-end learning of both the delay duration and its parameters. Evaluated on the Spiking Heidelberg Digits (SHD) benchmark, the method achieves state-of-the-art performance among delay-aware SNNs. Notably, it yields substantial accuracy gains for compact network architectures, demonstrating its effectiveness and generalizability in enhancing temporal modeling capability while preserving computational efficiency.
📝 Abstract
Spiking neural networks (SNNs) are biologically inspired, event-driven models that are well suited to processing temporal data and offer energy-efficient computation when implemented on neuromorphic hardware. In SNNs, richer neuronal dynamics allow capturing more complex temporal dependencies, with delays playing a crucial role by allowing past inputs to directly influence present spiking behavior. We propose a general framework for incorporating delays into SNNs through additional state variables. The proposed mechanism enables each neuron to access a finite temporal window of its input history. The framework is agnostic to the neuron model and can therefore be seamlessly integrated into standard spiking neuron models such as LIF and adLIF. We analyze how the duration of the delays and their associated learnable parameters affect performance, and we investigate the architectural trade-offs introduced by the additional state variables of the delay mechanism. Experiments on the Spiking Heidelberg Digits (SHD) dataset show that the proposed mechanism matches the performance of existing delay-based SNNs while remaining computationally efficient. Moreover, the results illustrate that incorporating delays may substantially improve performance in smaller networks.
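To make the mechanism concrete, here is a minimal sketch of a discrete-time LIF neuron whose input current is a learnable weighted sum over the last D input steps, with the history held in auxiliary state variables. This is an illustrative interpretation, not the paper's exact formulation; the function name, `delay_taps`, and all parameter values are hypothetical.

```python
import numpy as np

def delayed_lif(spikes_in, delay_taps, beta=0.9, threshold=1.0):
    """Discrete-time LIF neuron with a finite delay line.

    spikes_in  : (T,) input spike train
    delay_taps : (D,) learnable weights over the last D inputs
                 (the auxiliary state variables' readout)
    beta       : membrane decay factor (illustrative value)
    """
    D = len(delay_taps)
    history = np.zeros(D)      # auxiliary state: last D inputs
    v = 0.0                    # membrane potential
    out = np.zeros(len(spikes_in))
    for t, s in enumerate(spikes_in):
        history = np.roll(history, 1)
        history[0] = s                      # newest input first
        i_t = float(delay_taps @ history)   # delayed input current
        v = beta * v + i_t                  # leaky integration
        if v >= threshold:
            out[t] = 1.0
            v = 0.0                         # hard reset on spike
    return out
```

For example, with `delay_taps = [0, 0, 1]` only the input from two steps earlier drives the neuron, so a spike arriving at t=0 triggers an output spike at t=2; with learnable taps, gradient descent can shape which part of the finite history influences the present.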