Dual-Space Knowledge Distillation with Key-Query Matching for Large Language Models with Vocabulary Mismatch

📅 2026-03-23
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenge of knowledge distillation between large language models that use different tokenizers, a setting in which conventional distillation methods degrade significantly. The study presents the first systematic analysis of the cross-model attention mechanism in dual-space knowledge distillation, identifying mismatched key-query distributions as a critical bottleneck. To mitigate this, the authors propose a generative adversarial learning approach that aligns representation distributions across disparate vocabularies. The method is validated through token alignment probes and attention heatmap visualizations. Experiments on text generation tasks show modest but consistent improvements in cross-vocabulary distillation, particularly on out-of-distribution data, where ROUGE-L scores increase by 0.37 on average, narrowing the gap with same-vocabulary distillation.

๐Ÿ“ Abstract
Large language models (LLMs) achieve state-of-the-art (SOTA) performance across language tasks, but are costly to deploy due to their size and resource demands. Knowledge Distillation (KD) addresses this by training smaller Student models to mimic larger Teacher models, improving efficiency without significant performance loss. Dual-Space Knowledge Distillation with Cross-Model Attention (DSKD-CMA) has emerged as a SOTA method for KD between LLMs with distinct tokenizers, yet its internal workings remain largely opaque. In this work, we systematically analyse the attention mechanism of DSKD-CMA through manual token alignment probing and heatmap visualisations, revealing both strengths and limitations. Building on this, we introduce a novel method, DSKD-CMA-GA, based on Generative Adversarial (GA) learning, to address the mismatched distributions between the keys and queries computed from distinct models. Experiments show modest but consistent ROUGE-L gains in text generation quality, particularly on out-of-distribution data (+0.37 on average), narrowing the gap between cross- and same-tokenizer KD.
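To make the abstract's mechanism concrete, the sketch below illustrates the general idea of cross-model attention with key-query matching: queries come from the student, keys and values from the teacher, and a row-stochastic attention map projects teacher states onto the student's token grid despite the two tokenizers producing different sequence lengths. A logistic discriminator over keys versus queries illustrates the adversarial-alignment idea of DSKD-CMA-GA. All dimensions, names, and the random initialisation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: the student tokenizer splits a sentence into 6 tokens,
# the teacher's into 9 -- vocabulary mismatch means the sequences do not line
# up one-to-one, so a soft alignment is needed.
d_s, d_t, d = 32, 48, 16            # student/teacher hidden sizes, shared attention dim
S_len, T_len = 6, 9

H_s = rng.standard_normal((S_len, d_s))          # student hidden states
H_t = rng.standard_normal((T_len, d_t))          # teacher hidden states

# Learned projections (random placeholders here): queries from the student,
# keys/values from the teacher.
W_q = rng.standard_normal((d_s, d)) / np.sqrt(d_s)
W_k = rng.standard_normal((d_t, d)) / np.sqrt(d_t)
W_v = rng.standard_normal((d_t, d)) / np.sqrt(d_t)

Q, K, V = H_s @ W_q, H_t @ W_k, H_t @ W_v

def soft_align(Q, K, V):
    """Row-stochastic attention that projects teacher values onto the
    student's token grid, so a distillation loss can be taken per student token."""
    scores = Q @ K.T / np.sqrt(d)                        # (S_len, T_len)
    A = np.exp(scores - scores.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)                    # each row sums to 1
    return A, A @ V

A, T_on_student_grid = soft_align(Q, K, V)

# Adversarial alignment, per the summary above: a discriminator tries to tell
# teacher keys (label 1) from student queries (label 0); training the
# projections to fool it pulls the two distributions together. One logistic
# discriminator evaluation, for illustration only:
w_d = rng.standard_normal(d) / np.sqrt(d)

def discriminator_loss(K, Q, w):
    logits = np.concatenate([K @ w, Q @ w])
    labels = np.concatenate([np.ones(len(K)), np.zeros(len(Q))])
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

print(A.shape, T_on_student_grid.shape)   # (6, 9) (6, 16)
```

In a real training loop the discriminator would be updated to minimise this loss while `W_q` and `W_k` are updated to maximise it; the sketch only evaluates the objective once to show its shape.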
Problem

Research questions and friction points this paper is trying to address.

Knowledge Distillation
Vocabulary Mismatch
Large Language Models
Key-Query Matching
Tokenizers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Distillation
Vocabulary Mismatch
Generative Adversarial Learning
Cross-Model Attention
Large Language Models