🤖 AI Summary
This work addresses the limitation of traditional knowledge distillation, which aligns only sample-level prediction probabilities while neglecting inter-class relationships and the geometric structure of prediction distributions. To overcome this, the authors propose a bilateral contrastive knowledge distillation method that introduces a bilateral contrastive loss to jointly optimize two objectives: intra-sample consistency between teacher and student predictions and inter-class orthogonality in the generalized feature space. This dual optimization imposes stronger structural constraints on the student's output distribution, thereby enhancing knowledge transfer. The proposed approach consistently outperforms existing distillation techniques across multiple model architectures and benchmark datasets, demonstrating its effectiveness in improving both predictive accuracy and distributional fidelity.
📝 Abstract
Knowledge distillation (KD) is a machine learning framework that transfers knowledge from a teacher model to a student model. The vanilla KD proposed by Hinton et al. has been the dominant approach in logit-based distillation and demonstrates compelling performance. However, it only performs sample-wise probability alignment between the teacher's and student's predictions, lacking a mechanism for class-wise comparison. Moreover, vanilla KD imposes no structural constraint on the probability space. In this work, we propose a simple yet effective methodology, bilateral contrastive knowledge distillation (BicKD). This approach introduces a novel bilateral contrastive loss, which intensifies the orthogonality among the generalization spaces of different classes while preserving consistency within the same class. The bilateral formulation enables explicit comparison of both sample-wise and class-wise prediction patterns between teacher and student. By emphasizing probabilistic orthogonality, BicKD further regularizes the geometric structure of the predictive distribution. Extensive experiments show that our BicKD method enhances knowledge transfer and consistently outperforms state-of-the-art knowledge distillation techniques across various model architectures and benchmarks.
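To make the bilateral idea concrete, the sketch below illustrates one plausible reading of aligning teacher and student predictions along both axes of the logit matrix: a KL divergence over classes for each sample (the vanilla KD term) and a KL divergence over the batch for each class (a class-wise term). This is a minimal illustration assuming a batch-of-logits interface; the function name `bilateral_kd_loss` and the weighting scheme are hypothetical, not the paper's exact formulation, which additionally involves a contrastive/orthogonality objective not reproduced here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def kl_div(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def bilateral_kd_loss(teacher_logits, student_logits, T=4.0, alpha=0.5):
    """Illustrative bilateral loss (hypothetical weighting, not the paper's exact loss).

    teacher_logits, student_logits: B x C nested lists of raw logits.
    Sample-wise term: softmax over classes per row, as in vanilla KD.
    Class-wise term: softmax over the batch per column, comparing how each
    class distributes its probability mass across samples.
    """
    B, C = len(teacher_logits), len(teacher_logits[0])
    # Sample-wise alignment (rows): distributions over classes.
    t_rows = [softmax([z / T for z in row]) for row in teacher_logits]
    s_rows = [softmax([z / T for z in row]) for row in student_logits]
    sample_loss = sum(kl_div(t, s) for t, s in zip(t_rows, s_rows)) / B
    # Class-wise alignment (columns): distributions over the batch.
    t_cols = [softmax([teacher_logits[b][c] / T for b in range(B)]) for c in range(C)]
    s_cols = [softmax([student_logits[b][c] / T for b in range(B)]) for c in range(C)]
    class_loss = sum(kl_div(t, s) for t, s in zip(t_cols, s_cols)) / C
    # Scale by T^2 as is conventional for temperature-softened KD losses.
    return (T * T) * (alpha * sample_loss + (1 - alpha) * class_loss)
```

As a sanity check, the loss vanishes when student and teacher logits coincide and is strictly positive otherwise, since KL divergence is non-negative and zero only for identical distributions.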