🤖 AI Summary
This work addresses a limitation of standard knowledge distillation: by optimizing only final outputs, it neglects the uncertainty carried in the teacher model's intermediate layers and thus fails to transfer the teacher's reasoning process. To overcome this, the authors propose a symmetric knowledge distillation mechanism that uses the Logit Lens to project both teacher and student hidden states into the vocabulary space. By aligning the two models' reasoning paths with a symmetric KL divergence, the method imposes a bidirectional penalty that prevents the student from becoming either overconfident or underconfident, thereby preserving high-entropy information pathways. Experiments show that this approach significantly outperforms conventional knowledge distillation and feature-based transfer baselines on both GPT-2 and Llama architectures, with superior generalization across diverse instruction-following tasks.
📝 Abstract
Standard Knowledge Distillation (KD) compresses Large Language Models (LLMs) by optimizing final outputs, yet it typically treats the thought process in the teacher's intermediate layers as a black box. While feature-based distillation attempts to bridge this gap, existing objectives (e.g., MSE and asymmetric KL divergence) ignore the rich uncertainty profiles required for the final output. In this paper, we introduce DistillLens, a framework that symmetrically aligns the evolving thought processes of student and teacher models. By projecting intermediate hidden states into the vocabulary space via the Logit Lens, we enforce structural alignment using a symmetric divergence objective. Our analysis shows that this constraint imposes a dual-sided penalty, preventing both overconfidence and underconfidence while preserving the high-entropy information conduits essential for final deduction. Extensive experiments on GPT-2 and Llama architectures demonstrate that DistillLens consistently outperforms standard KD and feature-transfer baselines on diverse instruction-following benchmarks. The code is available at https://github.com/manishdhakal/DistillLens.
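The core mechanism described above can be sketched in a few lines: project each model's intermediate hidden states into the vocabulary space with an unembedding matrix (a simplified Logit Lens) and penalize the symmetric KL divergence between the resulting distributions. This is a minimal NumPy illustration under assumed shapes, not the authors' implementation; the function and variable names (`logit_lens`, `symmetric_kl`, `W_U_t`, etc.) are ours, and a real Logit Lens would also apply the model's final LayerNorm before the unembedding.

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def logit_lens(hidden, W_U):
    # Project intermediate hidden states into the vocabulary space.
    # (Simplified: omits the final LayerNorm a full Logit Lens applies.)
    return hidden @ W_U

def symmetric_kl(p, q, eps=1e-9):
    # KL(p||q) + KL(q||p), averaged over sequence positions.
    # The two terms penalize the student for being underconfident
    # and overconfident relative to the teacher, respectively.
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)), axis=-1)
    return float(np.mean(kl_pq + kl_qp))

# Toy dimensions: teacher/student hidden sizes, shared vocab, sequence length.
rng = np.random.default_rng(0)
d_t, d_s, vocab, seq = 8, 4, 16, 5
W_U_t = rng.normal(size=(d_t, vocab))  # teacher unembedding
W_U_s = rng.normal(size=(d_s, vocab))  # student unembedding
h_t = rng.normal(size=(seq, d_t))      # teacher intermediate states
h_s = rng.normal(size=(seq, d_s))      # student intermediate states

p_teacher = softmax(logit_lens(h_t, W_U_t))
p_student = softmax(logit_lens(h_s, W_U_s))
loss = symmetric_kl(p_teacher, p_student)
```

Because the objective is symmetric, swapping teacher and student distributions leaves the loss unchanged, which is what gives the dual-sided (over- and underconfidence) penalty the abstract refers to.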