Balance Divergence for Knowledge Distillation

📅 2025-01-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing knowledge distillation methods—such as Kullback–Leibler (KL) divergence—often neglect low-probability “dark knowledge” in teacher models, leading to imbalanced information transfer and limited generalization, especially in vision tasks. To address this, we propose Balance Divergence Distillation (BDD), a framework that incorporates reverse KL divergence as a complementary mechanism, jointly modeling both positive and negative knowledge. BDD further introduces a dynamically adjusted temperature-scaling strategy to balance these dual knowledge sources. The method integrates joint KL and reverse KL optimization, adaptive temperature scaling, and a lightweight student-network adaptation framework. On CIFAR-100 and ImageNet, BDD improves the accuracy of lightweight student models by 1–3%. On Cityscapes, it boosts the mIoU of PSP-ResNet18 by 4.55%, significantly enhancing fine-grained semantic knowledge transfer.
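To make the mechanism concrete, below is a minimal PyTorch sketch of a distillation loss that combines forward and reverse KL under a shared temperature. This is an illustrative reconstruction, not the authors' released code; the weighting coefficient `alpha` and temperature `T` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def balanced_divergence_loss(student_logits, teacher_logits, T=4.0, alpha=0.5):
    """Illustrative balanced distillation loss (assumption, not the paper's code):
    forward KL pulls the student toward the teacher's high-probability classes,
    while reverse KL adds weight to the low-probability "dark knowledge" that
    the forward term tends to ignore."""
    p_t = F.softmax(teacher_logits / T, dim=1)           # teacher distribution
    p_s = F.softmax(student_logits / T, dim=1)           # student distribution
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    log_p_t = F.log_softmax(teacher_logits / T, dim=1)

    # Forward KL(teacher || student): dominated by the teacher's large probabilities.
    kl_fwd = F.kl_div(log_p_s, p_t, reduction="batchmean")
    # Reverse KL(student || teacher): more sensitive to the teacher's small probabilities.
    kl_rev = F.kl_div(log_p_t, p_s, reduction="batchmean")

    # T**2 rescaling is the usual convention for temperature-scaled distillation.
    return (alpha * kl_fwd + (1.0 - alpha) * kl_rev) * (T ** 2)
```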

📝 Abstract
Knowledge distillation has been widely adopted in computer vision tasks, since it can effectively enhance the performance of lightweight student networks by leveraging the knowledge transferred from cumbersome teacher networks. Most existing knowledge distillation methods utilize Kullback-Leibler divergence to mimic the logit output probabilities between the teacher network and the student network. Nonetheless, these methods may neglect the negative parts of the teacher's "dark knowledge", because the divergence calculation may ignore the effect of the minute probabilities in the teacher's logit output. This deficiency may lead to suboptimal logit mimicry during the distillation process and result in an imbalance of the information acquired by the student network. In this paper, we investigate the impact of this imbalance and propose a novel method, named Balance Divergence Distillation. By introducing a compensatory operation using reverse Kullback-Leibler divergence, our method improves the modeling of the extremely small values on the negative side of the teacher's output while preserving the learning capacity for the positive side. Furthermore, we examine the impact of different temperature-coefficient adjustments, which can further balance knowledge transfer. We evaluate the proposed method on several computer vision tasks, including image classification and semantic segmentation. The evaluation results show that our method achieves an accuracy improvement of 1%–3% for lightweight students on both the CIFAR-100 and ImageNet datasets, and a 4.55% improvement in mIoU for PSP-ResNet18 on the Cityscapes dataset. The experiments show that our method is a simple yet highly effective solution that can be smoothly applied to different knowledge distillation methods.
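As a hedged illustration of how such a loss could be used, the sketch below wires the `balanced_divergence_loss` function above into a standard distillation training step. The cross-entropy/distillation weight `beta` and the fixed temperature are assumptions for the example, not values reported by the authors.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, images, labels, optimizer,
                      T=4.0, alpha=0.5, beta=0.5):
    """One illustrative training step: cross-entropy on ground-truth labels plus
    the balanced divergence term sketched above. Hyperparameters are assumptions."""
    teacher.eval()
    with torch.no_grad():
        teacher_logits = teacher(images)   # frozen teacher provides soft targets
    student_logits = student(images)

    ce = F.cross_entropy(student_logits, labels)
    bd = balanced_divergence_loss(student_logits, teacher_logits, T=T, alpha=alpha)
    loss = (1.0 - beta) * ce + beta * bd

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```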
Problem

Research questions and friction points this paper is trying to address.

Knowledge Distillation
Information Loss
Computer Vision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balance Divergence Distillation
Uniform Knowledge Transfer
Computer Vision Tasks
🔎 Similar Papers
No similar papers found.
Yafei Qi
School of Computer Science and Engineering, Central South University, Changsha 410073, Hunan, China
Chen Wang
Institute of Artificial Intelligence, Shaoxing University, Shaoxing 312000, Zhejiang, China
Zhaoning Zhang
National University of Defense Technology
MLSys, Computer Vision, Distributed Computing
Yaping Liu
Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, Guangdong, China
Yongmin Zhang
Central South University
IoT, Mobile Computing, Big Data