SpikingGamma: Surrogate-Gradient Free and Temporally Precise Online Training of Spiking Neural Networks with Smoothed Delays

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Training spiking neural networks (SNNs) at high temporal resolution faces challenges such as inaccurate surrogate gradients, limited temporal modeling capacity, and difficulties in hardware deployment. This work proposes SpikingGamma, an SNN architecture that combines a differentiable delay mechanism and an internal recurrent memory structure with sigma-delta spike coding, enabling online error backpropagation without surrogate gradients. The method accurately learns fine-grained temporal patterns, achieving competitive accuracy across multiple benchmark tasks while maintaining extremely sparse spiking activity and robustness to the choice of temporal resolution. Its design is also inherently compatible with neuromorphic hardware, improving both training stability and deployment efficiency.

📝 Abstract
Neuromorphic hardware implementations of Spiking Neural Networks (SNNs) promise energy-efficient, low-latency AI through sparse, event-driven computation. Yet, training SNNs under fine temporal discretization remains a major challenge, hindering both low-latency responsiveness and the mapping of software-trained SNNs to efficient hardware. In current approaches, spiking neurons are modeled as self-recurrent units, embedded into recurrent networks to maintain state over time, and trained with BPTT or RTRL variants based on surrogate gradients. These methods scale poorly with temporal resolution, while online approximations often exhibit instability for long sequences and tend to fail at capturing temporal patterns precisely. To address these limitations, we develop spiking neurons with internal recursive memory structures that we combine with sigma-delta spike-coding. We show that this SpikingGamma model supports direct error backpropagation without surrogate gradients, can learn fine temporal patterns with minimal spiking in an online manner, and scale feedforward SNNs to complex tasks and benchmarks with competitive accuracy, all while being insensitive to the temporal resolution of the model. Our approach offers both an alternative to current recurrent SNNs trained with surrogate gradients, and a direct route for mapping SNNs to neuromorphic hardware.
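The abstract's key coding choice, sigma-delta spike coding, transmits quantized *changes* in a neuron's activation rather than its absolute value, which is what keeps spiking activity sparse for slowly varying signals. The sketch below is a hypothetical, minimal illustration of that idea (not the paper's exact formulation): the encoder tracks the residual between the input and what has already been transmitted, emits signed spikes when the residual exceeds a threshold, and the decoder simply integrates the spikes.

```python
import numpy as np

def sigma_delta_encode(signal, threshold=0.5):
    """Encode a signal as sparse signed spikes carrying quantized changes.

    Illustrative sketch of sigma-delta coding: a spike count is emitted
    only when the untransmitted residual exceeds the threshold.
    """
    sent = 0.0          # running total already transmitted
    spikes = []
    for x in signal:
        residual = x - sent
        s = np.round(residual / threshold)  # signed number of threshold units
        spikes.append(s)
        sent += s * threshold
    return np.array(spikes)

def sigma_delta_decode(spikes, threshold=0.5):
    """Reconstruct the signal by integrating the transmitted changes."""
    return np.cumsum(spikes) * threshold

signal = np.sin(np.linspace(0, 2 * np.pi, 50))
spikes = sigma_delta_encode(signal)
recon = sigma_delta_decode(spikes)
# Reconstruction error stays within threshold/2, and most steps emit no spike.
```

Because only changes are signaled, a slowly varying input produces mostly zero entries in `spikes`, which is the sparsity property the abstract ties to energy-efficient, event-driven computation.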
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
temporal precision
online training
surrogate gradients
neuromorphic hardware
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpikingGamma
surrogate-gradient free
temporally precise
online training
sigma-delta coding
Roel Koopman
Machine Learning Group, CWI
Sebastian Otte
Institute for Robotics and Cognitive Systems
Artificial Intelligence, Machine Learning, Neural Networks
Sander Bohté
Machine Learning Group, CWI