AI Summary
This work addresses the challenge of knowledge distillation between large language models that employ different tokenizers, a scenario in which conventional distillation methods suffer significant performance degradation. The study presents the first systematic analysis of cross-model attention in dual-space knowledge distillation, identifying mismatched key-query distributions as a critical bottleneck. To mitigate this issue, the authors propose a novel approach based on generative adversarial learning that aligns representation distributions across disparate vocabularies. The method is validated through token alignment probes and attention heatmap visualizations. Experimental results on text generation tasks demonstrate modest but consistent improvements in cross-vocabulary distillation performance, particularly on out-of-distribution data, where ROUGE-L scores increase by an average of 0.37, narrowing the gap with same-vocabulary distillation.
Abstract
Large language models (LLMs) achieve state-of-the-art (SOTA) performance across language tasks, but are costly to deploy due to their size and resource demands. Knowledge Distillation (KD) addresses this by training smaller Student models to mimic larger Teacher models, improving efficiency without significant performance loss. Dual-Space Knowledge Distillation with Cross-Model Attention (DSKD-CMA) has emerged as a SOTA method for KD between LLMs with distinct tokenizers, yet its internal workings remain largely opaque. In this work, we systematically analyse the attention mechanism of DSKD-CMA through manual token alignment probing and heatmap visualisations, revealing both strengths and limitations. Building on this analysis, we introduce a novel method, DSKD-CMA-GA, based on Generative Adversarial (GA) learning, to address the mismatch between the key and query distributions computed by the two models. Experiments show modest but consistent ROUGE-L gains in text generation quality, particularly on out-of-distribution data (+0.37 on average), narrowing the gap between cross- and same-tokenizer KD.
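The abstract describes using adversarial learning to align the student-side key/query distributions with the teacher's. The paper's exact architecture is not given here, so the following is only a minimal sketch of the standard GAN-style losses such an alignment could use, with a hypothetical linear discriminator (`disc_w`, `disc_b`) standing in for whatever discriminator network the authors actually train: the discriminator learns to tell teacher projections ("real") from student projections ("fake"), while the student-side projector minimises the non-saturating generator loss so its keys/queries become indistinguishable from the teacher's.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adversarial_alignment_losses(student_proj, teacher_proj, disc_w, disc_b):
    """Sketch of adversarial distribution alignment (assumed form, not the
    paper's exact method).

    student_proj, teacher_proj: (batch, dim) projected key/query vectors.
    disc_w, disc_b: parameters of a hypothetical linear discriminator.
    Returns (d_loss, g_loss): the discriminator minimises d_loss; the
    student-side projector minimises g_loss.
    """
    s_logits = student_proj @ disc_w + disc_b
    t_logits = teacher_proj @ disc_w + disc_b
    eps = 1e-9  # numerical floor inside the logs

    # Standard GAN discriminator loss: teacher vectors labelled 1, student 0.
    d_loss = (-np.mean(np.log(sigmoid(t_logits) + eps))
              - np.mean(np.log(1.0 - sigmoid(s_logits) + eps)))

    # Non-saturating generator loss: push student projections toward "real".
    g_loss = -np.mean(np.log(sigmoid(s_logits) + eps))
    return d_loss, g_loss
```

In a full training loop these two losses would be optimised in alternation alongside the usual distillation objective, so the alignment term only reshapes the key/query distributions rather than replacing the KD signal.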