Self-Attentive Spatio-Temporal Calibration for Precise Intermediate Layer Matching in ANN-to-SNN Distillation

📅 2025-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance limitations in ANN-to-SNN knowledge distillation—specifically, semantic mismatch across intermediate layers and lack of spatiotemporal consistency—this paper proposes a self-attention-driven spatiotemporal calibration mechanism. It is the first to introduce self-attention for cross-network, cross-temporal semantic alignment, automatically identifying optimal layer pairs and jointly calibrating spatial structure and temporal dynamics. The method integrates intermediate-layer knowledge distillation with spike-based training optimization. Experiments demonstrate state-of-the-art accuracy: SNNs surpass their ANN counterparts on CIFAR-10 (95.12%) and CIFAR-100 (79.40%) using only two time steps; achieve 68.69% top-1 accuracy on ImageNet with four steps; and attain 97.92% and 83.60% accuracy on DVS-Gesture and DVS-CIFAR10, respectively. Critically, these gains are achieved under significantly improved energy efficiency, establishing new benchmarks for high-performance, low-energy neuromorphic computing.

📝 Abstract
Spiking Neural Networks (SNNs) are promising for low-power computation due to their event-driven mechanism, but often suffer from lower accuracy compared to Artificial Neural Networks (ANNs). ANN-to-SNN knowledge distillation can improve SNN performance, but previous methods either focus solely on label information, missing valuable intermediate-layer features, or use a layer-wise approach that neglects spatial and temporal semantic inconsistencies, leading to performance degradation. To address these limitations, we propose a novel method called self-attentive spatio-temporal calibration (SASTC). SASTC uses self-attention to identify semantically aligned layer pairs between ANN and SNN, both spatially and temporally. This enables the autonomous transfer of relevant semantic information. Extensive experiments show that SASTC outperforms existing methods, effectively solving the mismatching problem. It achieves 95.12% accuracy on CIFAR-10 and 79.40% on CIFAR-100 with 2 time steps, and 68.69% on ImageNet with 4 time steps for static datasets, as well as 97.92% on DVS-Gesture and 83.60% on DVS-CIFAR10 for neuromorphic datasets. This marks the first time SNNs have outperformed ANNs on both CIFAR-10 and CIFAR-100, shedding new light on the potential applications of SNNs.
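The core idea described in the abstract, using self-attention to score which ANN layers each SNN layer should distill from, rather than fixing layer pairs by hand, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes each layer is summarized by a pooled feature descriptor (here, SNN descriptors are taken as already averaged over time steps) and uses scaled dot-product attention to weight a feature-matching loss:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_layer_matching(ann_feats, snn_feats):
    """Illustrative sketch of self-attentive layer pairing.

    ann_feats: list of (d,) pooled descriptors, one per ANN layer (keys)
    snn_feats: list of (d,) time-averaged descriptors, one per SNN layer (queries)
    Returns the attention matrix over candidate ANN layers and an
    attention-weighted mean-squared feature-matching loss.
    """
    K = np.stack(ann_feats)                    # (num_ann_layers, d)
    Q = np.stack(snn_feats)                    # (num_snn_layers, d)
    d = Q.shape[-1]
    # Each SNN layer attends over all ANN layers (scaled dot product).
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)          # (num_snn, num_ann)
    # Per-pair MSE between SNN and ANN descriptors.
    diffs = ((Q[:, None, :] - K[None, :, :]) ** 2).mean(-1)  # (num_snn, num_ann)
    # Loss concentrates on the semantically aligned pairs found by attention.
    loss = (attn * diffs).sum(axis=-1).mean()
    return attn, loss

# Toy usage with random descriptors (3 ANN layers, 2 SNN layers, dim 8)
rng = np.random.default_rng(0)
ann = [rng.standard_normal(8) for _ in range(3)]
snn = [rng.standard_normal(8) for _ in range(2)]
attn, loss = attentive_layer_matching(ann, snn)
```

In the paper's setting, the same scoring would additionally run across SNN time steps (the temporal calibration), and the loss term would be added to the spike-based training objective; the sketch above shows only the spatial pairing step.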
Problem

Research questions and friction points this paper is trying to address.

Spiking Neural Networks
Accuracy Improvement
Knowledge Distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

SASTC
Knowledge Distillation
ANN-SNN Calibration
Di Hong
The College of Computer Science and Technology, Zhejiang University, China; Nanhu Brain-computer Interface Institute, Hangzhou, China; The State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, China
Yueming Wang
Zhejiang University
Brain-computer Interface · Pattern recognition · Machine learning · Neural signal processing