🤖 AI Summary
Large language models (LLMs) suffer from high inference costs and substantial memory overhead; moreover, conventional knowledge distillation (KD) often leads to teacher misguidance because student-generated outputs (SGOs) are noisy and biased, especially in long-sequence generation. Method: This paper proposes SWITCH, the first framework enabling dynamic, selective teacher intervention during the student's sequence generation. It identifies divergence points via token-level probability discrepancies between teacher and student and employs a sequence-length-aware confidence gating mechanism to correct low-confidence segments, integrated within a multi-stage distillation paradigm. Contribution/Results: Extensive experiments across three model families and five instruction-following benchmarks show that SWITCH significantly enhances long-text generation quality, achieving average improvements of 3.2–5.7 points in BLEU and ROUGE scores and outperforming existing KD methods.
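The gating idea described above can be made concrete with a small sketch. The snippet below is illustrative only: the discrepancy measure (KL from teacher to student) and the exponentially decaying, position-dependent threshold are assumptions chosen to show the shape of a sequence-length-aware gate, not the paper's actual formulation.

```python
import torch

def token_discrepancy(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor) -> torch.Tensor:
    """Per-position discrepancy between teacher and student next-token
    distributions, measured here (as an assumption) as KL(teacher || student).
    Both inputs have shape [seq_len, vocab_size]."""
    t_logp = torch.log_softmax(teacher_logits, dim=-1)
    s_logp = torch.log_softmax(student_logits, dim=-1)
    return (t_logp.exp() * (t_logp - s_logp)).sum(dim=-1)  # shape [seq_len]

def length_aware_threshold(position: int,
                           base: float = 0.5,
                           decay: float = 0.995) -> float:
    """Hypothetical gate schedule: the threshold shrinks with position, so the
    teacher intervenes more readily deep into long sequences, where SGOs are
    most prone to drift."""
    return base * (decay ** position)
```

A shrinking threshold at later positions means more frequent teacher corrections exactly where student-generated text is most likely to have drifted, which is the intuition the summary attributes to SWITCH.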
📝 Abstract
Despite the success of Large Language Models (LLMs), they still face challenges related to high inference costs and memory requirements. To address these issues, Knowledge Distillation (KD) has emerged as a popular method for model compression, with student-generated outputs (SGOs) being particularly notable for reducing the mismatch between training and inference. However, SGOs are often noisy and biased, which can lead to misguidance from the teacher model, especially in long sequences. To mitigate these challenges, we propose SWITCH (Studying WIth TeaCHer for Knowledge Distillation), a novel approach that strategically incorporates the teacher model during the student's sequence generation. SWITCH identifies discrepancies between the token probabilities of the teacher and student models, allowing the teacher to intervene selectively, particularly in long sequences that are more prone to teacher misguidance. Extensive experimental results across three model families and five instruction-following datasets show that SWITCH surpasses traditional KD methods, particularly excelling in the generation of long sequences.
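To illustrate what selective teacher intervention during the student's sequence generation could look like in practice, here is a minimal decoding-loop sketch. It only assumes the abstract's comparison of teacher and student token probabilities; the specific divergence measure (KL), the fixed threshold, and the greedy-teacher / sampled-student choices are placeholders for exposition, not the authors' exact procedure.

```python
import torch

@torch.no_grad()
def distill_generate(student, teacher, prompt_ids, max_new_tokens=64, threshold=0.5):
    """Sketch of SWITCH-style decoding: the student proposes each next token, but
    when teacher and student next-token distributions diverge too much, the
    teacher's token is emitted instead. `student` / `teacher` map a [1, t] id
    tensor to [1, t, vocab] logits (e.g. a Hugging Face causal LM wrapped as
    `lambda ids: model(ids).logits`)."""
    ids = prompt_ids.clone()
    for _ in range(max_new_tokens):
        s_logits = student(ids)[:, -1, :]           # student next-token logits
        t_logits = teacher(ids)[:, -1, :]           # teacher next-token logits
        s_logp = torch.log_softmax(s_logits, dim=-1)
        t_logp = torch.log_softmax(t_logits, dim=-1)
        # Token-level discrepancy: KL(teacher || student); the paper's exact measure may differ.
        kl = (t_logp.exp() * (t_logp - s_logp)).sum(dim=-1)
        if kl.item() > threshold:                    # teacher intervenes on this token
            next_id = t_logp.argmax(dim=-1, keepdim=True)
        else:                                        # keep the student's own proposal
            next_id = torch.multinomial(s_logp.exp(), num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```

Under this reading, the sequences produced by such a loop would play the role of SGOs in the distillation objective, with the teacher's occasional corrections keeping long generations from drifting into regions where its feedback becomes unreliable.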