DistillLens: Symmetric Knowledge Distillation Through Logit Lens

📅 2026-02-14
🤖 AI Summary
This work addresses the limitation of standard knowledge distillation, which neglects the uncertainty in intermediate layers of the teacher model and thus fails to effectively transfer its reasoning process. To overcome this, the authors propose a symmetric knowledge distillation mechanism that leverages Logit Lens to project both teacher and student hidden states into the vocabulary space. By aligning their reasoning paths through symmetric KL divergence, the method imposes bidirectional penalties that prevent the student from becoming either overconfident or underconfident, thereby preserving high-entropy information pathways. Experiments demonstrate that this approach significantly outperforms conventional knowledge distillation and feature-based transfer baselines on both GPT-2 and Llama architectures, exhibiting superior generalization across diverse instruction-following tasks.

📝 Abstract
Standard Knowledge Distillation (KD) compresses Large Language Models (LLMs) by optimizing final outputs, yet it typically treats the thought process of the teacher's intermediate layers as a black box. While feature-based distillation attempts to bridge this gap, existing methods (e.g., MSE and asymmetric KL divergence) ignore the rich uncertainty profiles required for the final output. In this paper, we introduce DistillLens, a framework that symmetrically aligns the evolving thought processes of student and teacher models. By projecting intermediate hidden states into the vocabulary space via the Logit Lens, we enforce structural alignment using a symmetric divergence objective. Our analysis proves that this constraint imposes a dual-sided penalty, preventing both overconfidence and underconfidence while preserving the high-entropy information conduits essential for final deduction. Extensive experiments on GPT-2 and Llama architectures demonstrate that DistillLens consistently outperforms standard KD and feature-transfer baselines on diverse instruction-following benchmarks. The code is available at https://github.com/manishdhakal/DistillLens.
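The objective described in the abstract can be sketched roughly: project each model's intermediate hidden state into vocabulary space with its own unembedding matrix (the Logit Lens), then penalize the symmetric KL divergence between the resulting distributions. This is a minimal illustrative sketch, not the authors' implementation; all names, shapes, and the use of random matrices here are assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def logit_lens(hidden, W_unembed):
    # Logit Lens: map a hidden state into vocabulary space
    # using the model's own unembedding matrix.
    return hidden @ W_unembed

def symmetric_kl(p, q, eps=1e-12):
    # 0.5 * (KL(p||q) + KL(q||p)), averaged over sequence positions.
    # The two directions give the dual-sided penalty: KL(p||q) punishes
    # student underconfidence, KL(q||p) punishes student overconfidence.
    p, q = p + eps, q + eps
    kl_pq = (p * np.log(p / q)).sum(axis=-1)
    kl_qp = (q * np.log(q / p)).sum(axis=-1)
    return 0.5 * (kl_pq + kl_qp).mean()

# Toy dimensions (hypothetical): teacher/student hidden sizes, shared vocab.
rng = np.random.default_rng(0)
seq, d_teacher, d_student, vocab = 5, 8, 4, 16
teacher_h = rng.normal(size=(seq, d_teacher))  # teacher intermediate states
student_h = rng.normal(size=(seq, d_student))  # student intermediate states
W_teacher = rng.normal(size=(d_teacher, vocab))  # teacher unembedding
W_student = rng.normal(size=(d_student, vocab))  # student unembedding

p = softmax(logit_lens(teacher_h, W_teacher))
q = softmax(logit_lens(student_h, W_student))
loss = symmetric_kl(p, q)  # non-negative; zero only when p == q
```

In practice this term would be computed per aligned layer pair and added to the usual output-level KD loss; the per-layer pairing and weighting are details the abstract does not specify.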
Problem

Research questions and friction points this paper is trying to address.

Knowledge Distillation
Large Language Models
Intermediate Representations
Uncertainty Modeling
Symmetric Alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symmetric Knowledge Distillation
Logit Lens
Intermediate Alignment
High-Entropy Information
Structural Alignment
Manish Dhakal
Georgia State University, GA, USA
Uthman Jinadu
Georgia State University, GA, USA
Anjila Budathoki
Georgia State University, GA, USA
Rajshekhar Sunderraman
Georgia State University
Yi Ding
Auburn University, AL, USA