AI Summary
This work addresses a limitation of conventional knowledge distillation caused by mismatched feature distributions between teacher and student models. To mitigate this issue, the authors propose DSKD, a novel approach that integrates a lightweight diffusion model to perform denoising sampling on student features under the guidance of the teacher's classifier. A self-distillation mechanism is then established between the original and denoised student features to enhance representation learning. Furthermore, locality-sensitive hashing (LSH) is employed to enable efficient feature alignment, effectively alleviating mapping discrepancies. Extensive experiments demonstrate that DSKD consistently outperforms existing distillation methods across multiple visual recognition tasks and model architectures, achieving superior performance and strong generalization.
Abstract
Existing Knowledge Distillation (KD) methods often align feature information between teacher and student by designing meaningful feature-processing schemes and loss functions. However, because the teacher and student have different feature distributions, the student model may learn incompatible information from the teacher. To address this problem, we propose teacher-guided student Diffusion Self-KD, dubbed DSKD. Instead of direct teacher-student alignment, we leverage the teacher classifier to guide the sampling process that denoises student features through a lightweight diffusion model. We then propose a novel locality-sensitive hashing (LSH)-guided feature distillation method between the original and denoised student features. The denoised student features encapsulate teacher knowledge and can thus be regarded as playing the teacher's role. In this way, DSKD eliminates discrepancies in mapping manners and feature distributions between the teacher and student, while still learning meaningful knowledge from the teacher. Experiments on visual recognition tasks demonstrate that DSKD significantly outperforms existing KD methods across various models and datasets. Our code is included in the supplementary material.
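The abstract does not spell out the LSH-guided distillation objective, but a rough sketch of the general idea, using random-projection hashing (SimHash) to align original and denoised student features, might look like the following. All function names, shapes, and the specific loss are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def lsh_codes(features, projections):
    # Random-projection LSH (SimHash): the sign of each projection
    # gives one bit of the hash code for every feature vector.
    # NOTE: a real training objective would need a differentiable
    # relaxation (e.g. tanh) of this hard sign; hypothetical sketch only.
    return (features @ projections > 0).astype(np.float32)

def lsh_alignment_loss(student_feats, denoised_feats, projections):
    # Compare hash codes of the original student features against those
    # of the denoised features, which act as the "teacher" in the
    # self-distillation step; mean Hamming distance serves as the loss.
    s = lsh_codes(student_feats, projections)
    t = lsh_codes(denoised_feats, projections)
    return float(np.mean(np.abs(s - t)))

rng = np.random.default_rng(0)
P = rng.standard_normal((128, 32))              # 32 hash bits (assumed)
f = rng.standard_normal((4, 128))               # original student features
f_denoised = f + 0.01 * rng.standard_normal((4, 128))  # lightly perturbed stand-in
loss = lsh_alignment_loss(f, f_denoised, P)     # small: codes mostly agree
```

Because LSH maps nearby vectors to the same codes with high probability, this kind of objective penalizes only coarse, bucket-level disagreement rather than forcing exact feature matching, which is one plausible reading of how hashing alleviates mapping discrepancies.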