AMiD: Knowledge Distillation for LLMs with $α$-mixture Assistant Distribution

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Knowledge distillation for large language models (LLMs) faces several challenges: unstable output-distribution alignment due to high-dimensional logits, near-zero-probability issues, and the lack of a systematic methodology for designing assistant distributions. Method: This paper proposes a unified distillation framework built on an α-mixture assistant distribution. A tunable parameter α constructs a continuous family of mixture distributions, expanding the design space of assistant distributions, and the family of divergences used with the assistant is generalized to enable optimization-aware loss design. The approach yields more robust knowledge-transfer paths for autoregressive language models. Contribution/Results: Experiments demonstrate that the method significantly outperforms state-of-the-art distillation approaches across multiple benchmarks, achieving stronger model compression and improved training stability.
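For intuition, here is a minimal PyTorch sketch of distillation against an α-mixture assistant, assuming the assistant is an α-power-mean interpolation between the teacher and student token distributions with a mixing weight β; the function names, the β weight, and the choice of KL as the divergence are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def alpha_mixture(p_teacher: torch.Tensor, p_student: torch.Tensor,
                  alpha: float, beta: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Alpha-power-mean interpolation between two categorical distributions over
    the vocabulary (assumed form of the assistant; the paper's exact construction
    may differ). alpha = 1 gives the arithmetic mixture beta*p + (1-beta)*q;
    alpha -> 0 approaches the renormalized geometric mixture p^beta * q^(1-beta)."""
    p = p_teacher.clamp_min(eps)
    q = p_student.clamp_min(eps)
    if abs(alpha) < 1e-6:
        m = p.pow(beta) * q.pow(1.0 - beta)          # geometric-mixture limit
    else:
        m = (beta * p.pow(alpha) + (1.0 - beta) * q.pow(alpha)).pow(1.0 / alpha)
    return m / m.sum(dim=-1, keepdim=True)           # renormalize over the vocabulary

def distill_loss(teacher_logits: torch.Tensor, student_logits: torch.Tensor,
                 alpha: float = 0.5, beta: float = 0.5) -> torch.Tensor:
    """One plausible instance of distillation against an assistant:
    KL(assistant || student), averaged over the batch.
    The divergence choice here is illustrative, not the paper's."""
    with torch.no_grad():
        p_teacher = F.softmax(teacher_logits, dim=-1)
    p_student = F.softmax(student_logits, dim=-1)
    assistant = alpha_mixture(p_teacher, p_student.detach(), alpha, beta)
    # F.kl_div expects log-probs as input and probs as target: KL(target || input).
    return F.kl_div(p_student.clamp_min(1e-8).log(), assistant, reduction="batchmean")
```

Under this assumed form, setting α = 1 reduces the assistant to an arithmetic mixture of the two distributions, while α → 0 approaches their normalized geometric mixture, which is how a single design variable can trace a continuous interpolation path between them.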

📝 Abstract
Autoregressive large language models (LLMs) have achieved remarkable improvements across many tasks but incur high computational and memory costs. Knowledge distillation (KD) mitigates this issue by transferring knowledge from a large teacher to a smaller student through distributional alignment. Previous studies have proposed various discrepancy metrics, but the capacity gap and the training instability caused by near-zero probabilities, stemming from the high-dimensional output of LLMs, remain fundamental limitations. To overcome these challenges, several approaches that implicitly or explicitly incorporate an assistant distribution have recently been proposed. However, past proposals of assistant distributions have been fragmented, lacking a systematic investigation of the interpolation path and the divergence. This paper proposes the $α$-mixture assistant distribution, a novel generalized family of assistant distributions, and $α$-mixture distillation, coined AMiD, a unified framework for KD using the assistant distribution. The $α$-mixture assistant distribution provides a continuous extension of the assistant distribution by introducing a new distribution design variable $α$, which has been fixed in all previous approaches. Furthermore, AMiD generalizes the family of divergences used with the assistant distributions based on optimality, which has also been restricted in previous works. Through extensive experiments, we demonstrate that AMiD offers superior performance and training stability by leveraging a broader and theoretically grounded assistant distribution space.
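Under the same assumed power-mean form as the sketch above, the assistant family and two illustrative objectives can be written as follows; the notation ($a_{α,β}$, $p_T$, $p_S$) and the specific divergences are assumptions for exposition, not the paper's exact formulation.

$$
a_{\alpha,\beta}(y \mid x) \;\propto\; \big(\beta\, p_T(y \mid x)^{\alpha} + (1-\beta)\, p_S(y \mid x)^{\alpha}\big)^{1/\alpha},
\qquad
\mathcal{L} \;=\; \mathrm{KL}\!\big(a_{\alpha,\beta} \,\|\, p_S\big)
\;\;\text{or}\;\;
\mathrm{KL}\!\big(p_S \,\|\, a_{\alpha,\beta}\big).
$$

With β = 1 the assistant collapses to the teacher and the forward-KL objective reduces to standard word-level KD, while α = 1 with β ∈ (0, 1) yields an arithmetic mixture of teacher and student, the interpolant used by skew-divergence-style distillation objectives.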
Problem

Research questions and friction points this paper is trying to address.

Addresses computational and memory costs in large language models
Overcomes capacity gap and training instability in knowledge distillation
Lacks a unified framework for assistant distribution-based knowledge distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces α-mixture assistant distribution for knowledge distillation
Generalizes divergence family based on optimality conditions
Provides a continuous extension via a tunable α parameter (see the usage sketch after this list)
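As a hypothetical usage of the sketch above (reusing its `distill_loss`), sweeping α shows how a single design variable traces the interpolation path between the two classical mixtures; the α values and random logits below are arbitrary illustrations.

```python
import torch

# Hypothetical sweep over the alpha design variable, reusing the distill_loss
# sketch above. Random logits stand in for real teacher/student outputs.
teacher_logits = torch.randn(4, 32000)
student_logits = torch.randn(4, 32000, requires_grad=True)

for a in (1.0, 0.5, 1e-6):   # 1.0: arithmetic mixture; ~0: geometric mixture
    loss = distill_loss(teacher_logits, student_logits, alpha=a, beta=0.5)
    print(f"alpha={a:g}  loss={loss.item():.4f}")
```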
Donghyeok Shin
Korea Advanced Institute of Science and Technology (KAIST)
Yeongmin Kim
Korea Advanced Institute of Science and Technology (KAIST)
Suhyeon Jo
Korea Advanced Institute of Science and Technology (KAIST)
Byeonghu Na
Korea Advanced Institute of Science and Technology (KAIST)
Generative Model · Diffusion Model
Il-Chul Moon
Professor, Department of Industrial and Systems Engineering, KAIST
Modeling and Simulation · Artificial Intelligence