Enhancing Cross-Tokenizer Knowledge Distillation with Contextual Dynamical Mapping

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the sequence misalignment and vocabulary mismatch problems arising from tokenizer heterogeneity in cross-tokenizer knowledge distillation. We propose the Contextual Dynamic Mapping (CDM) framework, which introduces a context-driven dynamic vocabulary mapping mechanism that jointly optimizes sequence-level alignment and vocabulary-level probability calibration. CDM further integrates synergistic training strategies that combine same-tokenizer and cross-tokenizer distillation. We conduct systematic evaluation across five major model families—LLaMA-3, Phi-3, Gemma-2, OPT, and Qwen-2—configured into three distinct heterogeneous teacher-student pairs. Experiments span instruction following, code generation, and mathematical reasoning tasks. Results demonstrate that CDM consistently outperforms existing cross-tokenizer distillation methods, delivering robust and stable performance gains across all evaluated configurations.
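To make the sequence-misalignment problem concrete, here is a minimal toy sketch (not the paper's algorithm): two greedy tokenizers with different vocabularies segment the same word into incompatible token sequences, and overlapping character spans recover a many-to-many alignment between them. The vocabularies and the span-overlap heuristic are illustrative assumptions.

```python
# Toy illustration of cross-tokenizer sequence misalignment.
# Both tokenizers see the same text but emit different token boundaries,
# so positions cannot be matched one-to-one; character spans can.

def tokenize(text, vocab):
    """Greedy longest-match tokenization over a toy vocabulary
    (falls back to single characters)."""
    tokens, spans, i = [], [], 0
    while i < len(text):
        for j in range(len(text), i, -1):
            if text[i:j] in vocab or j == i + 1:
                tokens.append(text[i:j])
                spans.append((i, j))
                i = j
                break
    return tokens, spans

def align_by_span(spans_a, spans_b):
    """Pair up token indices whose character spans overlap."""
    pairs = []
    for ia, (s1, e1) in enumerate(spans_a):
        for ib, (s2, e2) in enumerate(spans_b):
            if max(s1, s2) < min(e1, e2):  # spans overlap
                pairs.append((ia, ib))
    return pairs

text = "unhappiness"
teacher_vocab = {"un", "happi", "ness"}   # hypothetical teacher merges
student_vocab = {"unhap", "pi", "ness"}   # hypothetical student merges

t_tokens, t_spans = tokenize(text, teacher_vocab)
s_tokens, s_spans = tokenize(text, student_vocab)
print(t_tokens)                      # ['un', 'happi', 'ness']
print(s_tokens)                      # ['unhap', 'pi', 'ness']
print(align_by_span(t_spans, s_spans))
# [(0, 0), (1, 0), (1, 1), (2, 2)] — a many-to-many alignment
```

The static span overlap used here is only the starting point; CDM's contribution is to refine such alignments using contextual information rather than surface positions alone.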

📝 Abstract
Knowledge Distillation (KD) has emerged as a prominent technique for model compression. However, conventional KD approaches primarily focus on homogeneous architectures with identical tokenizers, constraining their applicability in cross-architecture scenarios. In cross-tokenizer KD, differences between the tokenizers give rise to two fundamental challenges: (1) sequence misalignment caused by divergent tokenization strategies, and (2) mismatched vocabulary size and composition. While existing probability-matching methods attempt to address these issues, their efficacy remains limited due to suboptimal alignment in both the sequence and vocabulary aspects. To overcome these limitations, we propose Contextual Dynamic Mapping (CDM), a novel cross-tokenizer distillation framework that employs contextual information to enhance sequence alignment precision and dynamically improve vocabulary mapping. We evaluated the effectiveness of our approach across five advanced and widely-used model families (i.e., LLaMA-3, Phi-3, Gemma-2, OPT, and Qwen-2), which were configured into three distinct teacher-student pairs. Our method shows significant advantages over existing cross-tokenizer distillation baselines across diverse benchmarks, including instruction following, code generation, and math. Notably, our analysis reveals that combining conventional same-tokenizer distillation and cross-tokenizer distillation through CDM yields further performance improvements. The code is available at https://github.com/pppa2019/ContexualDynamicMapping
Problem

Research questions and friction points this paper is trying to address.

Cross-tokenizer knowledge distillation challenges
Sequence misalignment in tokenization strategies
Mismatched vocabulary size and composition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextual Dynamic Mapping
Uses contextual information to enhance sequence alignment precision
Dynamically improves teacher-to-student vocabulary mapping
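The vocabulary-mapping bullet above can be illustrated with a deliberately naive baseline (an assumption for exposition, not CDM itself): teacher next-token probabilities are projected onto the student vocabulary by exact surface-form match and renormalized. The example shows why a static mapping is lossy, which is the gap a dynamic, context-aware mapping is designed to close. All token strings and probabilities here are invented.

```python
# Naive static vocabulary mapping: transfer teacher probability mass to
# the student vocabulary only where the token strings match exactly,
# then renormalize. Teacher tokens absent from the student vocabulary
# simply lose their mass — the failure mode a dynamic mapping addresses.

def map_probs(teacher_probs, student_vocab):
    mapped = {tok: 0.0 for tok in student_vocab}
    for tok, p in teacher_probs.items():
        if tok in mapped:          # exact-match tokens transfer directly
            mapped[tok] += p
        # unmatched mass (e.g. subword pieces unique to the teacher)
        # is dropped by this static scheme
    total = sum(mapped.values())
    return {t: p / total for t, p in mapped.items()} if total > 0 else mapped

teacher_probs = {"happy": 0.6, "happi": 0.3, "sad": 0.1}  # toy distribution
student_vocab = {"happy", "sad", "glad"}

print(map_probs(teacher_probs, student_vocab))
# "happi" has no student counterpart, so 0.3 of the teacher's mass is
# discarded before renormalization: happy -> 0.6/0.7, sad -> 0.1/0.7
```

A context-aware mapping would instead decide, per position, which student tokens best carry the mass of unmatched teacher tokens, rather than discarding it.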