Cross Knowledge Distillation between Artificial and Spiking Neural Networks

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Spiking Neural Networks (SNNs) underperform on event-camera (DVS) data because labeled event samples are scarce and SNN architectures remain immature. To address this, the paper proposes a cross-modal, cross-architecture knowledge distillation framework that integrates semantic alignment, a sliding replacement mechanism, and a staged indirect distillation strategy to transfer knowledge from high-performance Artificial Neural Networks (ANNs) trained on RGB images to SNNs operating on event streams. By jointly leveraging RGB supervision signals and DVS event sequences, the method mitigates the scarcity of annotated event data. Evaluated on the benchmark neuromorphic datasets N-Caltech101 and CEP-DVS, it consistently outperforms existing SNN methods, with accuracy improvements of 5.2–8.7%. The authors position this as the first work to systematically resolve the semantic-mismatch and temporal-heterogeneity challenges in ANN-to-SNN cross-modal knowledge distillation.
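For intuition, here is a minimal sketch of one distillation step in this setting, assuming a PyTorch-style setup with a frozen RGB-trained ANN teacher and a DVS-fed SNN student. The names `teacher_ann`, `student_snn`, and the paired `(rgb, dvs, label)` batch are hypothetical, and the loss shown is standard softened-logit distillation, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher_ann, student_snn, rgb, dvs, label, tau=4.0, alpha=0.5):
    """One step: task cross-entropy on the DVS labels plus a softened
    KL term that pulls the SNN's logits toward the RGB teacher's."""
    with torch.no_grad():
        t_logits = teacher_ann(rgb)    # frozen ANN teacher sees the RGB frame
    s_logits = student_snn(dvs)        # SNN logits, assumed rate-averaged over time steps
    ce = F.cross_entropy(s_logits, label)
    kd = F.kl_div(F.log_softmax(s_logits / tau, dim=1),
                  F.softmax(t_logits / tau, dim=1),
                  reduction="batchmean") * tau * tau   # Hinton-style tau^2 scaling
    return alpha * ce + (1.0 - alpha) * kd
```

Averaging the SNN's per-time-step outputs into a single logit vector before the softmax (rate coding) is a common convention and is assumed here.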

📝 Abstract
Recently, Spiking Neural Networks (SNNs) have demonstrated rich potential in the computer vision domain due to their high biological plausibility, event-driven characteristics, and energy efficiency. Still, limited annotated event-based datasets and immature SNN architectures leave their performance inferior to that of Artificial Neural Networks (ANNs). To enhance the performance of SNNs on their optimal data format, DVS data, we explore using RGB data and well-performing ANNs to implement knowledge distillation. Doing so requires solving both cross-modality and cross-architecture challenges. In this paper, we propose cross knowledge distillation (CKD), which not only leverages semantic similarity and sliding replacement to mitigate the cross-modality challenge, but also uses indirect phased knowledge distillation to mitigate the cross-architecture challenge. We validated our method on mainstream neuromorphic datasets, including N-Caltech101 and CEP-DVS. The experimental results show that our method outperforms current state-of-the-art methods. The code will be available at https://github.com/ShawnYE618/CKD
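One plausible reading of the indirect phased distillation is a two-phase schedule through an intermediate assistant network: phase one closes the modality gap while staying within the ANN architecture, and phase two closes the architecture gap on DVS data alone. The sketch below is an assumption for illustration, not the authors' confirmed pipeline; `assistant`, the loader, and the phase split are hypothetical, and `distill_step` is the helper from the sketch above.

```python
def indirect_phased_distillation(teacher_ann, assistant, student_snn,
                                 loader, optim_a, optim_s, epochs=(20, 40)):
    # Phase 1: distill the RGB-trained ANN into an ANN assistant that takes
    # DVS inputs, so only the modality gap is bridged here.
    for _ in range(epochs[0]):
        for rgb, dvs, y in loader:
            loss = distill_step(teacher_ann, assistant, rgb, dvs, y)
            optim_a.zero_grad(); loss.backward(); optim_a.step()
    # Phase 2: distill the DVS assistant into the SNN student; both networks
    # now see the same modality, so only the ANN-to-SNN gap remains.
    assistant.eval()
    for _ in range(epochs[1]):
        for _, dvs, y in loader:
            loss = distill_step(assistant, student_snn, dvs, dvs, y)
            optim_s.zero_grad(); loss.backward(); optim_s.step()
```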
Problem

Research questions and friction points this paper is trying to address.

Enhancing SNN performance on DVS data using ANN knowledge
Addressing cross-modality challenges in knowledge distillation
Overcoming cross-architecture issues in neural network training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross knowledge distillation between ANNs and SNNs
Semantic similarity and sliding replacement for cross-modality (see the sketch after this list)
Indirect phased knowledge distillation for cross-architecture
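Sliding replacement can be pictured as substituting a sliding window of the event-frame sequence with RGB-derived content, so the student sees both modalities within a single input. The sketch below is one possible instantiation under that assumption; the tensor shapes, `window` size, and the requirement that `rgb_frame` already be resized and channel-matched to the event frames are hypothetical, not the paper's exact mechanism.

```python
import torch

def sliding_replacement(dvs_seq, rgb_frame, step, window=2):
    """Replace a sliding window of event frames with an RGB-derived frame.

    dvs_seq:   [B, T, C, H, W] event-frame tensor
    rgb_frame: [B, C, H, W] RGB image, already resized and channel-matched
    step:      current training step; drives where the window sits
    """
    B, T, C, H, W = dvs_seq.shape
    start = step % T                 # the window advances one slot per step
    mixed = dvs_seq.clone()          # keep the original sequence untouched
    for t in range(start, min(start + window, T)):
        mixed[:, t] = rgb_frame      # inject static RGB content at slot t
    return mixed
```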
Shuhan Ye
Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
Yuanbin Qian
Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
Chong Wang
Merchants’ Guild Economics and Cultural Intelligent Computing Laboratory, Ningbo University, Ningbo, China; Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
Sunqi Lin
Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
Jiazhen Xu
The Australian National University
Jiangbo Qian
Merchants’ Guild Economics and Cultural Intelligent Computing Laboratory, Ningbo University, Ningbo, China; Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China
Yuqi Li
Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, China