🤖 AI Summary
Spiking neural networks (SNNs) suffer from high O(T) training complexity due to sequential simulation over T timesteps, severely limiting efficiency for long temporal sequences. To address this, we propose Fixed-Point Parallel Training (FPT), a novel training paradigm that reduces complexity to O(K) (K ≈ 3) without altering network architecture. Our key innovation is the first reformulation of the Leaky Integrate-and-Fire (LIF) neuron model as a parallelizable fixed-point iteration, enabling full-timestep parallelization. We provide theoretical convergence guarantees and unify existing parallel SNN training approaches under this framework. Experiments demonstrate that FPT achieves significant training acceleration while preserving exact LIF dynamics, which is particularly beneficial for long-sequence tasks, and exhibits strong scalability and practical applicability.
📝 Abstract
Spiking Neural Networks (SNNs) often suffer from high time complexity $O(T)$ due to sequential processing over $T$ timesteps, making training computationally expensive. In this paper, we propose a novel Fixed-point Parallel Training (FPT) method to accelerate SNN training without modifying the network architecture or introducing additional assumptions. FPT reduces the time complexity to $O(K)$, where $K$ is a small constant (usually $K=3$), by using a fixed-point iteration form of Leaky Integrate-and-Fire (LIF) neurons for all $T$ timesteps. We provide a theoretical convergence analysis of FPT and demonstrate that existing parallel spiking neurons can be viewed as special cases of our proposed method. Experimental results show that FPT effectively simulates the dynamics of original LIF neurons, significantly reducing computational time without sacrificing accuracy. This makes FPT a scalable and efficient solution for real-world applications, particularly for long-sequence tasks. Our code will be released at [https://github.com/WanjinVon/FPT](https://github.com/WanjinVon/FPT).
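To make the fixed-point idea concrete, here is a minimal, hedged sketch in NumPy. It is not the paper's FPT implementation: it assumes a soft-reset LIF neuron, $u_t = \lambda(u_{t-1} - \theta s_{t-1}) + x_t$, $s_t = H(u_t - \theta)$, and only the forward pass. The key point it illustrates is that, once a spike-train estimate $s$ is fixed, the membrane potentials for all $T$ timesteps are a linear function of the inputs and past spikes and can be computed in parallel; iterating "membranes from spikes, spikes from membranes" a few times is the fixed-point iteration (all function names here are illustrative, not from the released code).

```python
import numpy as np

def lif_sequential(x, lam=0.9, theta=1.0):
    """Reference: step-by-step soft-reset LIF simulation, O(T) sequential."""
    v, spikes = 0.0, []
    for xt in x:
        v = lam * v + xt                 # leaky integration
        s = float(v >= theta)            # hard threshold
        spikes.append(s)
        v -= theta * s                   # soft reset after a spike
    return np.array(spikes)

def lif_fixed_point(x, lam=0.9, theta=1.0, K=3):
    """Parallel sketch: update spike estimates for all T timesteps jointly.

    Unrolling the recurrence gives, for a fixed spike estimate s:
        u[t] = sum_{i<=t} lam**(t-i) * x[i]  -  theta * sum_{i<t} lam**(t-i) * s[i]
    so each iteration is two matrix-vector products instead of a T-step loop.
    """
    T = len(x)
    idx = np.arange(T)
    # Lower-triangular decay matrix: A[t, i] = lam**(t-i) for i <= t, else 0.
    A = np.where(idx[None, :] <= idx[:, None],
                 lam ** (idx[:, None] - idx[None, :]), 0.0)
    drive = A @ x                        # leak-integrated input, computed once
    s = np.zeros(T)                      # initial guess: no spikes
    for _ in range(K):
        reset = theta * (A @ s)          # accumulated reset implied by s
        # a spike at step i only affects steps t > i, hence the one-step shift
        u = drive - lam * np.concatenate(([0.0], reset[:-1]))
        s = (u >= theta).astype(float)   # refreshed spike estimate
    return s
```

In this toy form, each iteration makes at least one more leading timestep exact (spike $t$ depends only on spikes before $t$), so $K=T$ always recovers the sequential result, and in practice far fewer iterations suffice; the paper's convergence analysis makes the $K \approx 3$ claim precise for its actual formulation.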