A Latency Coding Framework for Deep Spiking Neural Networks with Ultra-Low Latency

📅 2026-03-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of high inference latency and limited performance in traditional time-to-first-spike (TTFS) spiking neural networks (SNNs), which stem from the lack of efficient training methods. The authors propose a deep TTFS-SNN training framework based on backpropagation through time (BPTT), introducing a generalized temporal encoding module that integrates feature extraction while mitigating information loss. By relaxing the single-spike constraint to allow multiple spikes per neuron in intermediate layers, the approach alleviates gradient vanishing during training. Furthermore, a temporal adaptive decision (TAD) loss function is designed to overcome the incompatibility between latency coding and conventional cross-entropy loss. The resulting method achieves state-of-the-art accuracy among TTFS-SNNs while maintaining ultra-low inference latency and high energy efficiency, and it significantly enhances robustness against input perturbations.
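To make the coding scheme concrete: TTFS (and the latency coding that generalizes it) maps input intensity to the timing of a neuron's first spike, with stronger inputs firing earlier. The sketch below is a minimal illustration of plain intensity-to-latency mapping, not the paper's latency encoding module; the function name, the linear mapping, and `t_max` are assumptions for illustration.

```python
import numpy as np

def ttfs_encode(intensities, t_max=10):
    """Illustrative time-to-first-spike encoder: map intensities in
    [0, 1] to spike times so that brighter inputs fire earlier.
    A simple linear mapping, not the paper's LE module."""
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    # Higher intensity -> earlier spike; intensity 0 fires at the last step.
    times = np.round((1.0 - intensities) * (t_max - 1)).astype(int)
    # Spike train over t_max time steps: exactly one spike per input.
    spikes = np.zeros((intensities.size, t_max), dtype=np.uint8)
    spikes[np.arange(intensities.size), times] = 1
    return spikes

train = ttfs_encode([1.0, 0.5, 0.0], t_max=5)
# intensity 1.0 fires at t=0, 0.5 at t=2, 0.0 at t=4
```

The paper's framework replaces this direct intensity-to-latency mapping with a learned latency encoding module, precisely because a fixed mapping like the one above discards feature information.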

Technology Category

Application Category

📝 Abstract
Spiking neural networks (SNNs) offer a biologically inspired computing paradigm with significant potential for energy-efficient neural processing. Among the neural coding schemes of SNNs, Time-To-First-Spike (TTFS) coding, which encodes information through the precise timing of a neuron's first spike, provides exceptional energy efficiency and biological plausibility. Despite its theoretical advantages, existing TTFS models lack efficient training methods, suffering from high inference latency and limited performance. In this work, we present a comprehensive framework that enables the efficient training of deep TTFS-coded SNNs by employing the backpropagation through time (BPTT) algorithm. We refer to the generalized TTFS coding method in our framework as latency coding. The framework includes: (1) a latency encoding (LE) module with feature extraction and straight-through estimators to address the severe information loss of direct intensity-to-latency mapping and ensure smooth gradient flow; (2) relaxation of the strict single-spike constraint of traditional TTFS, allowing neurons in intermediate layers to fire multiple times to mitigate gradient vanishing in deep networks; (3) a temporal adaptive decision (TAD) loss function that dynamically weights supervision signals based on sample-dependent confidence, resolving the incompatibility between latency coding and standard cross-entropy loss. Experimental results demonstrate that our method achieves state-of-the-art accuracy in comparison to existing TTFS-coded SNNs, with ultra-low inference latency and superior energy efficiency. The framework also demonstrates improved robustness against input corruptions. Our study investigates the characteristics and potential of latency coding in scenarios demanding rapid response, providing valuable insights for further exploiting the temporal learning capabilities of SNNs.
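The abstract's point (3) describes weighting supervision signals by sample-dependent confidence. The paper's actual TAD formulation is not given here; the sketch below illustrates the general idea with a focal-loss-style weighting, where the cross-entropy term of each sample is scaled by how unconfident the model is on that sample. The function names and the specific weight `1 - p_true` are assumptions, not the paper's definition.

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def confidence_weighted_ce(logits, labels):
    """Illustrative confidence-weighted cross-entropy: each sample's
    loss is scaled by (1 - p_true), so low-confidence samples
    dominate the supervision signal. A hypothetical stand-in for the
    paper's TAD loss, not its actual formulation."""
    probs = softmax(np.asarray(logits, dtype=float))
    idx = np.arange(probs.shape[0])
    p_true = probs[idx, labels]
    ce = -np.log(p_true + 1e-12)      # per-sample cross-entropy
    weight = 1.0 - p_true             # sample-dependent confidence weight
    return float(np.mean(weight * ce))
```

Under such a weighting, confidently correct samples contribute almost nothing, while ambiguous samples keep receiving gradient signal throughout training.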
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Time-To-First-Spike coding
inference latency
training efficiency
energy efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

latency coding
spiking neural networks
Time-To-First-Spike
backpropagation through time
temporal adaptive decision