🤖 AI Summary
In conventional knowledge distillation, converting logits to probabilities via softmax discards discriminative information, and naively combining a logit-level loss with the usual probability-level loss triggers gradient conflicts and even performance degradation. This work is the first to theoretically attribute such conflicts to inconsistent gradient directions in the classifier head, grounded in neural collapse theory. We propose a decoupled dual-head architecture, with a bilinear head dedicated to logit-level supervision and a standard linear head for probability-level supervision, thereby preventing the classification head from collapsing. By isolating the two supervision signals, the framework lets them optimize the shared backbone in a complementary way. Extensive experiments across multiple benchmarks demonstrate consistent and significant improvements over state-of-the-art distillation methods, validating both the benefit of exploiting the richer information in logits and the value of explicitly decoupling the two heads.
📝 Abstract
Traditional knowledge distillation focuses on aligning the student's predicted probabilities with both the ground-truth labels and the teacher's predicted probabilities. However, the transition from logits to predicted probabilities obscures certain indispensable information. To address this issue, it is intuitive to introduce a logit-level loss as a supplement to the widely used probability-level loss, so as to exploit the latent information in the logits. Unfortunately, we empirically find that combining the newly introduced logit-level loss with the existing probability-level loss leads to performance degradation, even falling behind the performance of employing either loss in isolation. We attribute this phenomenon to the collapse of the classification head, which is verified by our theoretical analysis based on neural collapse theory. Specifically, the gradients of the two loss functions conflict in the linear classifier yet exhibit no such conflict within the backbone. Drawing on this theoretical analysis, we propose a novel method called dual-head knowledge distillation, which partitions the linear classifier into two classification heads responsible for the different losses, thereby preserving the beneficial effects of both losses on the backbone while eliminating their adverse influence on the classification head. Extensive experiments validate that our method effectively exploits the information inside the logits and achieves superior performance over state-of-the-art counterparts.
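Below is a minimal PyTorch sketch of the dual-head idea described in the abstract, assuming a generic shared backbone. The choice of MSE as the logit-level loss, temperature-scaled KL as the probability-level loss, and all class and argument names are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a dual-head student: two classification heads share one
# backbone, and each head is supervised by only one type of loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualHeadStudent(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                             # shared feature extractor
        self.prob_head = nn.Linear(feat_dim, num_classes)    # head for probability-level losses (CE + KD)
        self.logit_head = nn.Linear(feat_dim, num_classes)   # auxiliary head for the logit-level loss

    def forward(self, x):
        feat = self.backbone(x)                              # both heads consume the same features
        return self.prob_head(feat), self.logit_head(feat)

def dual_head_loss(prob_logits, aux_logits, teacher_logits, targets,
                   T: float = 4.0, alpha: float = 1.0, beta: float = 1.0):
    # Probability-level supervision on the main head:
    # cross-entropy with labels plus temperature-scaled KL to the teacher's probabilities.
    ce = F.cross_entropy(prob_logits, targets)
    kd = F.kl_div(
        F.log_softmax(prob_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Logit-level supervision routed only through the auxiliary head (MSE here as a stand-in),
    # so its gradients never collide with the probability-level losses inside the same head.
    logit_match = F.mse_loss(aux_logits, teacher_logits)
    return ce + alpha * kd + beta * logit_match
```

In this sketch each head receives gradients only from its own loss while the shared backbone is updated by both, mirroring the decoupling the abstract describes; the auxiliary logit-level head could simply be dropped at inference time, leaving the deployed student unchanged.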