🤖 AI Summary
In knowledge distillation, teacher and student models conventionally must share a tokenizer, which severely limits knowledge transfer from large to small models across architectures and tokenizer families and, in turn, raises deployment costs and hardware requirements. To address this, we propose the Universal Logit Distillation (ULD) loss, grounded in optimal transport theory, which for the first time enables tokenizer-agnostic, logit-level knowledge alignment between heterogeneous architectures and tokenizers. ULD implicitly learns a mapping between tokenizers by matching the semantic distributions defined over their disparate vocabularies. Extensive experiments on heterogeneous model pairs, including Llama–Qwen and Phi–Bloom, show that ULD consistently outperforms conventional logit distillation by over 15% in average performance. It also markedly improves the student model's generalization and deployment flexibility, removing the need for tokenizer co-design or vocabulary-alignment preprocessing.
📝 Abstract
Deploying large language models (LLMs) of several billion parameters can be impractical in most industrial use cases due to constraints such as cost, latency limitations, and hardware accessibility. Knowledge distillation (KD) offers a solution by compressing the knowledge of resource-intensive large models into smaller ones. Various strategies exist, some relying on the text generated by the teacher model and optionally using its logits to enhance learning. However, these logit-based methods often require both teacher and student models to share the same tokenizer, limiting their applicability across different LLM families. In this paper, we introduce the Universal Logit Distillation (ULD) loss, grounded in optimal transport, to address this limitation. Our experimental results demonstrate the effectiveness of the ULD loss in enabling distillation across models with different architectures and tokenizers, paving the way to a more widespread use of distillation techniques.
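To make the optimal-transport idea concrete, the following is a minimal sketch of a ULD-style loss for a single token position. It assumes (as the summary describes) that each model's next-token probabilities are sorted in descending order, the shorter vocabulary is zero-padded, and the Wasserstein-1 distance between the two sorted distributions is taken as the loss; sorting is what removes any dependence on a shared tokenizer. Function names and exact details here are illustrative, not the authors' implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def uld_loss(teacher_logits, student_logits):
    # Sort each next-token distribution in descending order: this discards
    # token identity, so the two vocabularies never need to be aligned.
    p = sorted(softmax(teacher_logits), reverse=True)
    q = sorted(softmax(student_logits), reverse=True)
    # Zero-pad the smaller vocabulary so both distributions have equal length.
    n = max(len(p), len(q))
    p += [0.0] * (n - len(p))
    q += [0.0] * (n - len(q))
    # Wasserstein-1 distance between the sorted, padded distributions.
    return sum(abs(a - b) for a, b in zip(p, q))
```

With this formulation, two models that assign the same probability mass profile to their top tokens incur zero loss even if their vocabularies differ entirely; in training, this term would be averaged over token positions and combined with the usual cross-entropy on teacher-generated text.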