KDFlow: A User-Friendly and Efficient Knowledge Distillation Framework for Large Language Models

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of existing knowledge distillation frameworks for large language models, which are constrained by using a homogeneous training backend for both the teacher and student models. To overcome this limitation, the authors propose KDFlow, a decoupled distillation framework that separates the training and inference backends—employing FSDP2 for student training and SGLang for teacher inference—and transmits only the teacher's hidden states via zero-copy data transfer, recomputing logits on the student side. This architecture natively supports cross-tokenizer distillation, off-policy learning, and online distillation, while offering a highly extensible API. Experiments show that KDFlow achieves 1.44× to 6.36× speedups over existing KD frameworks, substantially reducing engineering overhead and accelerating the prototyping and deployment of distilled LLMs.

📝 Abstract
Knowledge distillation (KD) is an essential technique to compress large language models (LLMs) into smaller ones. However, despite the distinct roles of the student model and the teacher model in KD, most existing frameworks still use a homogeneous training backend (e.g., FSDP and DeepSpeed) for both models, leading to suboptimal training efficiency. In this paper, we present a novel framework for LLM distillation, termed KDFlow, which features a decoupled architecture and employs SGLang for teacher inference. By bridging the training efficiency of FSDP2 and the inference efficiency of SGLang, KDFlow achieves full utilization of both advantages in a unified system. Moreover, instead of transferring full logits across different processes, our framework only transmits the teacher's hidden states using zero-copy data transfer and recomputes the logits on the student side, effectively balancing the communication cost and KD performance. Furthermore, our framework supports both off-policy and on-policy distillation and incorporates KD algorithms for cross-tokenizer KD through highly extensible and user-friendly APIs. Experiments show that KDFlow can achieve 1.44× to 6.36× speedup compared to current KD frameworks, enabling researchers to rapidly prototype and scale LLM distillation with minimal engineering overhead. Code is available at: https://github.com/songmzhang/KDFlow
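The abstract's key efficiency idea—transmitting only teacher hidden states and recomputing logits on the student side—can be sketched in a few lines of PyTorch. This is a minimal illustration, not KDFlow's actual API: the tensor shapes, the faked hidden states, and the use of a forward-KL loss are all assumptions for the sake of the example; in practice the hidden states would arrive from the SGLang inference backend via zero-copy transfer.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch B, sequence T, hidden size H, vocab V.
B, T, H, V = 2, 8, 16, 32

# Teacher hidden states as they would arrive from the inference backend
# (faked here with random values for illustration).
teacher_hidden = torch.randn(B, T, H)

# The teacher's output projection (lm_head), loaded on the student side so
# logits can be recomputed locally instead of being sent over the wire.
teacher_lm_head = torch.nn.Linear(H, V, bias=False)

# Student logits from the student's own forward pass (also faked here).
student_logits = torch.randn(B, T, V, requires_grad=True)

# Recompute teacher logits from the transmitted hidden states.
with torch.no_grad():
    teacher_logits = teacher_lm_head(teacher_hidden)  # (B, T, V)

# Forward-KL distillation loss: KL(teacher || student).
kd_loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.log_softmax(teacher_logits, dim=-1),
    reduction="batchmean",
    log_target=True,
)
kd_loss.backward()
```

Sending a (B, T, H) hidden-state tensor instead of a (B, T, V) logit tensor is the communication win: for modern LLMs the vocabulary size V (often 100K+) dwarfs the hidden size H, so the recompute-on-student-side design trades a cheap matrix multiply for a much smaller transfer.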
Problem

Research questions and friction points this paper is trying to address.

knowledge distillation
large language models
training efficiency
teacher-student framework
model compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

knowledge distillation
decoupled architecture
zero-copy transfer
LLM compression
SGLang