On the Generalization vs Fidelity Paradox in Knowledge Distillation

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates the trade-off between zero-shot reasoning generalization and inference fidelity in knowledge distillation (KD) across language models ranging from 0.5B to 7B parameters. Methodologically, it conducts large-scale empirical and statistical analysis on 14 complex reasoning tasks, employing cross-scale distillation, logit smoothing, and teacher-signal ablation to establish a rigorous zero-shot evaluation benchmark. Key contributions include: (1) demonstrating that smaller student models gain substantial average accuracy improvements (+10% overall, up to +22% on individual tasks), whereas larger students exhibit marginal gains (≈1.3%); (2) identifying task-specific teacher expertise—not global teacher performance—as a critical, previously overlooked driver of KD efficacy; and (3) revealing, for the first time, a “performance–fidelity paradox”: while KD consistently improves output accuracy, it frequently degrades student fidelity to the teacher’s reasoning process. These findings challenge conventional assumptions about KD scalability and highlight the need for fidelity-aware distillation strategies.

📝 Abstract
Knowledge distillation (KD) is a key technique for compressing large language models into smaller ones while preserving performance. Despite the recent traction of KD research, its effectiveness for smaller language models (LMs) and the mechanisms driving knowledge transfer remain underexplored. In this work, we present the first large-scale empirical and statistical analysis of KD across models ranging from 0.5B to 7B parameters on 14 complex reasoning tasks in a zero-shot setting. Our findings reveal that KD can improve the average performance of smaller models by up to 10%, with a peak task-specific gain of 22%, while providing only marginal benefits (~1.3%) for larger models. Surprisingly, teacher performance has a minimal impact on student outcomes, while teacher task expertise impacts KD effectiveness. A correlation study indicates that smaller LMs benefit more from KD, whereas larger LMs show diminished gains. Additionally, we uncover a misalignment between improvements in student performance and reasoning fidelity, suggesting that while KD enhances accuracy, it does not always maintain the structured decision-making processes of the teacher. Our ablation study further highlights the importance of teacher signals and logit smoothing in influencing students' performance after distillation. Overall, our study offers a comprehensive empirical and statistical assessment of KD, highlighting both its benefits and trade-offs when distilling knowledge from larger to smaller LMs.
Problem

Research questions and friction points this paper is trying to address.

Explores knowledge distillation effectiveness for small language models
Analyzes misalignment between performance gains and reasoning fidelity
Investigates impact of teacher expertise on distillation outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale empirical analysis of knowledge distillation
Teacher task expertise impacts distillation effectiveness
Logit smoothing enhances student model performance
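The logit smoothing highlighted above is typically realized via a temperature-scaled distillation objective. Below is a minimal NumPy sketch of the standard KD loss (temperature-softened KL divergence between teacher and student distributions, following Hinton et al.); the paper's exact loss formulation and hyperparameters may differ.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; temperature > 1 smooths ("softens") the logits.
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-smoothed distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))
```

When the student exactly matches the teacher's logits the loss is zero; higher temperatures expose more of the teacher's relative preferences over non-argmax classes, which is the signal the ablation study attributes to logit smoothing.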
Suhas Kamasetty Ramesh
Department of Electrical Engineering, Indian Institute of Technology Delhi, India
Ayan Sengupta
Indian Institute of Technology Delhi
Natural Language Processing · Meta Learning · Reinforcement Learning
Tanmoy Chakraborty
Department of Electrical Engineering, Indian Institute of Technology Delhi, India