UNDO: Understanding Distillation as Optimization

📅 2025-04-03
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Standard one-step knowledge distillation suffers from performance limitations due to a misalignment between teacher explanations and student learning needs. Method: The paper proposes an iterative knowledge distillation framework that reformulates distillation as a teacher-student co-optimization process. In each cycle, it identifies the student's errors and uses them to guide the teacher in generating targeted, improved reasoning justifications, combining iterative prompt refinement, error-driven teacher re-prompting, reasoning-path alignment, and evaluation-guided distillation cycles to progressively refine teacher explanations. The method generalizes across student models without requiring teacher retraining. Results: On mathematical and commonsense reasoning benchmarks, the approach achieves up to a 20% absolute improvement over standard single-step distillation. Moreover, the distilled high-quality explanations transfer well across diverse student architectures, validating the framework's generality and practical utility.
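The cycle described above (distill teacher rationales into the student, collect the student's errors, re-prompt the teacher with that feedback, repeat) can be sketched as a toy loop. Everything here is illustrative: `ToyStudent`, `toy_teacher`, `run_undo_loop`, and the memorization-based "training" are assumptions for the sketch, not the paper's actual models or implementation.

```python
class ToyStudent:
    """Stand-in for a small student model: 'training' just memorizes
    answers found in rationales the teacher has marked as usable."""
    def __init__(self):
        self.memory = {}

    def train(self, rationales):
        # Distillation step (toy): absorb only precise, verified rationales.
        for q, r in rationales.items():
            if r.endswith("(verified)"):
                self.memory[q] = r.split()[0]

    def answer(self, q):
        return self.memory.get(q, "?")


def toy_teacher(question, feedback=None):
    # Without feedback the teacher gives a vague rationale; once shown the
    # student's wrong answer, it produces a precise one. Questions are tiny
    # arithmetic expressions, so eval() stands in for teacher reasoning.
    if feedback is None:
        return "think step by step"
    return f"{eval(question)} via direct evaluation (verified)"


def run_undo_loop(teacher, student, dataset, rounds=3):
    """UNDO-style loop: distill, find errors, re-prompt the teacher on failures only."""
    rationales = {q: teacher(q) for q, _ in dataset}
    for _ in range(rounds):
        student.train(rationales)                      # distillation step
        errors = [q for q, a in dataset if student.answer(q) != a]
        if not errors:                                 # student solves everything: stop
            break
        for q in errors:                               # error-driven teacher re-prompting
            rationales[q] = teacher(q, feedback=student.answer(q))
    return student
```

The key design point the paper emphasizes is visible even in this sketch: the teacher is only re-prompted on items the student currently gets wrong, so each round's rationales are tailored to the remaining learning deficiencies rather than regenerated wholesale.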

πŸ“ Abstract
Knowledge distillation has emerged as an effective strategy for compressing the knowledge of large language models (LLMs) into smaller, more efficient student models. However, standard one-shot distillation methods often produce suboptimal results due to a mismatch between teacher-generated rationales and the student's specific learning requirements. In this paper, we introduce UNDO (UNderstanding Distillation as Optimization), a framework designed to bridge this gap by iteratively identifying the student's errors and prompting the teacher to refine its explanations accordingly. Each iteration directly targets the student's learning deficiencies, motivating the teacher to provide tailored and enhanced rationales that specifically address these weaknesses. Empirical evaluations on various challenging mathematical and commonsense reasoning tasks demonstrate that our iterative distillation method, UNDO, significantly outperforms standard one-step distillation methods, achieving performance gains of up to 20%. Additionally, we show that teacher-generated data refined through our iterative process remains effective even when applied to different student models, underscoring the broad applicability of our approach. Our work fundamentally reframes knowledge distillation as an iterative teacher-student interaction, effectively leveraging dynamic refinement by the teacher for better knowledge distillation.
Problem

Research questions and friction points this paper is trying to address.

Optimizing knowledge distillation for better student model performance
Addressing mismatch between teacher rationales and student learning needs
Iterative refinement of teacher explanations to target student weaknesses
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative distillation targeting student errors
Dynamic teacher refinement of explanations
Broad applicability across student models
Kushal Jain
UC San Diego
Piyushi Goyal
ETH Zurich
Kumar Shridhar
ETH Zurich
NLP · Deep Learning · Machine Learning