🤖 AI Summary
To address the limited generalization of Spiking Neural Networks (SNNs) in knowledge distillation—caused by conventional KL divergence’s overemphasis on high-probability predictions while neglecting low-probability regions—this paper proposes Head-Tail Aware KL divergence (HTA-KL) distillation. The core innovation is a novel cumulative-probability-based dynamic masking mechanism that adaptively partitions the output distribution into “head” (high-probability) and “tail” (low-probability) regions, jointly optimizing both forward and reverse KL divergences to achieve global alignment of SNN output distributions. This enables a cross-paradigm distillation framework bridging SNNs and Artificial Neural Networks (ANNs). Evaluated on CIFAR-10, CIFAR-100, and Tiny ImageNet, HTA-KL outperforms state-of-the-art methods on most benchmarks, achieving higher accuracy with fewer simulation time steps—thereby improving both energy efficiency and generalization.
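A minimal formal sketch of how such a cumulative-probability split and combined loss might look, where $p^{T}$ and $p^{S}$ denote the teacher and student output distributions over $C$ classes; the cumulative threshold $\gamma$, the head/tail assignment, and the region weights $\alpha$, $\beta$ are illustrative assumptions rather than the paper's exact definitions:

```latex
% Hedged sketch: partition classes by cumulative teacher probability.
% \gamma (cumulative threshold) and \alpha, \beta (region weights) are assumptions.
\mathcal{H} = \Big\{ k \;\Big|\; \textstyle\sum_{j:\, p^{T}_{j} \ge p^{T}_{k}} p^{T}_{j} \le \gamma \Big\},
\qquad
\mathcal{T} = \{1, \dots, C\} \setminus \mathcal{H}

% Forward KL on the head region, reverse KL on the tail region.
\mathcal{L}_{\mathrm{HTA\text{-}KL}}
  = \alpha \sum_{k \in \mathcal{H}} p^{T}_{k} \log \frac{p^{T}_{k}}{p^{S}_{k}}
  + \beta \sum_{k \in \mathcal{T}} p^{S}_{k} \log \frac{p^{S}_{k}}{p^{T}_{k}}
```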
📝 Abstract
Spiking Neural Networks (SNNs) have emerged as a promising approach for energy-efficient and biologically plausible computation. However, due to limitations in existing training methods and inherent model constraints, SNNs often exhibit a performance gap when compared to Artificial Neural Networks (ANNs). Knowledge distillation (KD) has been explored as a technique to transfer knowledge from ANN teacher models to SNN student models to mitigate this gap. Traditional KD methods typically use Kullback-Leibler (KL) divergence to align output distributions. However, conventional KL-based approaches fail to fully exploit the unique characteristics of SNNs, as they tend to overemphasize high-probability predictions while neglecting low-probability ones, leading to suboptimal generalization. To address this, we propose Head-Tail Aware Kullback-Leibler (HTA-KL) divergence, a novel KD method for SNNs. HTA-KL introduces a cumulative probability-based mask to dynamically distinguish between high- and low-probability regions. It assigns adaptive weights to ensure balanced knowledge transfer, enhancing overall performance. By integrating forward KL (FKL) and reverse KL (RKL) divergences, our method effectively aligns both the head and tail regions of the distribution. We evaluate our method on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets. Our method outperforms existing methods on most datasets with fewer timesteps.
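A hedged PyTorch sketch of what a head-tail aware KL distillation loss of this kind could look like; the function name `hta_kl_loss`, the threshold `cum_threshold`, the temperature `tau`, and the mass-based adaptive weighting are assumptions for illustration, not the authors' released implementation:

```python
import torch
import torch.nn.functional as F


def hta_kl_loss(student_logits, teacher_logits, cum_threshold=0.5, tau=2.0):
    """Sketch of a head-tail aware KL loss (assumed formulation).

    The head/tail split, the threshold value, and the per-region weights
    are illustrative choices; the paper's exact definitions may differ.
    """
    p_t = F.softmax(teacher_logits / tau, dim=-1)          # teacher distribution
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)  # student log-probs
    log_p_t = torch.log(p_t + 1e-12)

    # Cumulative-probability mask: sort teacher probs in descending order and
    # mark classes whose cumulative mass stays below the threshold as the
    # "head"; the remaining classes form the "tail".
    sorted_p, idx = torch.sort(p_t, dim=-1, descending=True)
    cum_p = torch.cumsum(sorted_p, dim=-1)
    head_sorted = cum_p <= cum_threshold
    head_sorted[..., 0] = True  # always keep the top class in the head
    head_mask = torch.zeros_like(p_t).scatter(-1, idx, head_sorted.float()).bool()
    tail_mask = ~head_mask

    # Forward KL (teacher-led) on the head region, reverse KL (student-led)
    # on the tail region.
    p_s = log_p_s.exp()
    fkl = (p_t * (log_p_t - log_p_s) * head_mask.float()).sum(dim=-1)
    rkl = (p_s * (log_p_s - log_p_t) * tail_mask.float()).sum(dim=-1)

    # Adaptive weights by teacher probability mass in each region (assumed form).
    w_head = (p_t * head_mask.float()).sum(dim=-1)
    w_tail = 1.0 - w_head
    return ((w_head * fkl + w_tail * rkl) * tau * tau).mean()
```

In a typical KD setup, a term of this form would be added to the standard cross-entropy loss on the SNN student's (time-averaged) output logits.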