🤖 AI Summary
This work addresses the limitations of existing knowledge distillation methods when integrating multiple heterogeneous strategies, which often suffer from implementation complexity, rigid combinations, and catastrophic forgetting. To overcome these challenges, the authors propose a Sequential Multi-Stage Knowledge Distillation (SMSKD) framework that applies distinct distillation techniques in successive stages. Each stage leverages a frozen reference model from the previous stage to anchor learned knowledge, while a sample-level adaptive loss weighting mechanism—based on the teacher’s true class probability (TCP)—dynamically balances knowledge retention and integration. The framework flexibly accommodates arbitrary distillation strategies and numbers of stages, consistently yielding significant accuracy improvements for student models across diverse teacher–student architectures, outperforming current baselines with negligible computational overhead.
📝 Abstract
Knowledge distillation (KD) transfers knowledge from large teacher models to compact student models, enabling efficient deployment on resource-constrained devices. While diverse KD methods, including response-based, feature-based, and relation-based approaches, capture different aspects of teacher knowledge, integrating multiple methods or knowledge sources is promising but often hampered by complex implementation, inflexible combinations, and catastrophic forgetting, which limits practical effectiveness. This work proposes SMSKD (Sequential Multi-Stage Knowledge Distillation), a flexible framework that sequentially integrates heterogeneous KD methods. At each stage, the student is trained with a specific distillation method, while a frozen reference model from the previous stage anchors learned knowledge to mitigate forgetting. In addition, we introduce an adaptive weighting mechanism based on the teacher's true class probability (TCP) that dynamically adjusts the reference loss per sample to balance knowledge retention and integration. By design, SMSKD supports arbitrary method combinations and stage counts with negligible computational overhead. Extensive experiments show that SMSKD consistently improves student accuracy across diverse teacher–student architectures and method combinations, outperforming existing baselines. Ablation studies confirm that stage-wise distillation and reference-model supervision are the primary contributors to performance gains, with TCP-based adaptive weighting providing complementary benefits. Overall, SMSKD is a practical and resource-efficient solution for integrating heterogeneous KD methods.
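The per-sample adaptive weighting described in the abstract can be sketched in pure Python. This is a minimal illustration, not the paper's implementation: the exact mapping from TCP to the reference-loss weight is an assumption here (we use the TCP value directly as the weight), and `kd_loss` / `ref_loss` stand in for whatever stage-specific distillation loss and frozen-reference loss are in use.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def tcp(teacher_logits, true_class):
    """Teacher's true class probability (TCP) for one sample:
    the softmax probability the teacher assigns to the ground-truth label."""
    return softmax(teacher_logits)[true_class]

def stage_loss(kd_loss, ref_loss, teacher_logits, true_class):
    """Illustrative per-sample stage loss: the reference loss (which anchors
    knowledge distilled in the previous stage via the frozen reference model)
    is scaled by the teacher's TCP, so samples the teacher handles confidently
    emphasize retention while uncertain samples emphasize the new KD signal.
    The specific weighting schedule is a hypothetical stand-in."""
    w = tcp(teacher_logits, true_class)
    return kd_loss + w * ref_loss
```

For example, with teacher logits `[1.0, 2.0, 3.0]` and true class `2`, the TCP is the largest softmax entry, so the reference loss is weighted relatively heavily for that sample.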