Learning from Dense Events: Towards Fast Spiking Neural Networks Training via Event Dataset Distillation

📅 2025-11-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high training cost of Spiking Neural Networks (SNNs) in event-based vision—stemming from temporal encoding—we propose PACE, the first dataset distillation framework tailored for event data. Our method introduces three key innovations: (1) the ST-DSM module, enabling fine-grained spatiotemporal feature matching of amplitude and phase; (2) PEQ-N, a plug-and-play probabilistic integer quantizer compatible with standard event-frame pipelines; and (3) residual membrane potential–driven spike densification (SDR) combined with synthetic sample optimization. On N-MNIST, PACE achieves 84.4% accuracy using only the distilled dataset—about 85% of the full-data performance—while accelerating training by more than 50× and reducing storage by 6000×. This marks the first demonstration of minute-scale, highly efficient SNN training on event data.

📝 Abstract
Event cameras sense brightness changes and output binary asynchronous event streams, attracting increasing attention. Their bio-inspired dynamics align well with spiking neural networks (SNNs), offering a promising energy-efficient alternative to conventional vision systems. However, SNNs remain costly to train due to temporal coding, which limits their practical deployment. To alleviate the high training cost of SNNs, we introduce **PACE** (Phase-Aligned Condensation for Events), the first dataset distillation framework for SNNs and event-based vision. PACE distills a large training dataset into a compact synthetic one that enables fast SNN training, achieved by two core modules: **ST-DSM** and **PEQ-N**. ST-DSM uses residual membrane potentials to densify spike-based features (SDR) and to perform fine-grained spatiotemporal matching of amplitude and phase (ST-SM), while PEQ-N provides a plug-and-play straight-through probabilistic integer quantizer compatible with standard event-frame pipelines. Across the DVS-Gesture, CIFAR10-DVS, and N-MNIST datasets, PACE outperforms existing coreset selection and dataset distillation baselines, with particularly strong gains on dynamic event streams and at low to moderate IPC. Specifically, on N-MNIST it achieves 84.4% accuracy, about 85% of the full training set performance, while reducing training time by more than 50× and storage cost by 6000×, yielding compact surrogates that enable minute-scale SNN training and efficient edge deployment.
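The abstract describes PEQ-N as a "straight-through probabilistic integer quantizer." The paper's exact formulation is not given on this page, but the general technique (stochastic rounding to a low-bit integer grid, so the quantized value is unbiased in expectation, with gradients passed straight through the rounding in training) can be sketched as follows. The function name `peq_n_quantize` and the value range are illustrative assumptions, not PACE's actual API:

```python
import numpy as np

def peq_n_quantize(x, bits=2, rng=None):
    """Sketch of probabilistic (stochastic-rounding) integer quantization.

    Inputs in [0, 1] are scaled to the integer grid {0, ..., 2**bits - 1};
    the fractional part becomes the probability of rounding up, so the
    expected dequantized value equals the input. This is forward-only:
    in SNN training the rounding gradient would be passed straight through.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    levels = 2 ** bits - 1
    scaled = np.clip(x, 0.0, 1.0) * levels
    floor = np.floor(scaled)
    frac = scaled - floor
    round_up = rng.random(scaled.shape) < frac  # round up with prob = frac
    return (floor + round_up) / levels          # dequantize back to [0, 1]
```

Grid points are reproduced exactly (0.0 and 1.0 map to themselves), while intermediate values land on a neighboring level with probability proportional to their distance, keeping the quantizer unbiased.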
Problem

Research questions and friction points this paper is trying to address.

Reduces high training costs of spiking neural networks
Compresses large event datasets into compact synthetic versions
Enables fast SNN training for efficient edge deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

PACE framework distills large datasets for fast SNN training
ST-DSM module densifies spikes and matches spatiotemporal features
PEQ-N provides plug-and-play quantizer for event-frame pipelines
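The ST-DSM bullet refers to matching spatiotemporal feature statistics between real and synthetic data, which is the core idea behind distribution-matching dataset distillation: optimize synthetic samples so their feature embeddings match those of the real dataset. A minimal sketch of such a matching objective, assuming mean-embedding matching (the simplest variant; PACE's actual amplitude/phase loss is more elaborate, and the function name here is illustrative):

```python
import numpy as np

def distribution_matching_loss(real_feats, syn_feats):
    """Toy distribution-matching objective for dataset distillation.

    Both arguments are (batch, dim) feature matrices from the same
    encoder; the loss is the squared distance between the two batch
    mean embeddings. Driving it to zero aligns the synthetic set's
    first-moment feature statistics with the real data's.
    """
    diff = real_feats.mean(axis=0) - syn_feats.mean(axis=0)
    return float(np.sum(diff ** 2))
```

In a full pipeline this loss would be minimized with respect to the synthetic samples themselves, backpropagating through the feature encoder.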
Shuhan Ye
Nanyang Technological University
Yi Yu
Nanyang Technological University
Qixin Zhang
Nanyang Technological University
Chenqi Kong
Nanyang Technological University
Qiangqiang Wu
Postdoc, City University of Hong Kong, Princeton University
Computer Vision · Self-Supervised Temporal Representation Learning · Healthcare AI
Kun Wang
Nanyang Technological University
Xudong Jiang
Nanyang Technological University