Exploring Knowledge Purification in Multi-Teacher Knowledge Distillation for LLMs

📅 2026-02-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of multi-teacher knowledge distillation for small language models, which often suffers from conflicting knowledge across teachers and high computational overhead, hindering both performance and training efficiency. To mitigate these issues, we propose a novel paradigm termed "knowledge purification," which consolidates reasoning rationales from multiple large language model teachers into a single, consistent rationale, alleviating conflicts and enhancing distillation effectiveness. We introduce this purification mechanism for the first time and design five distinct purification strategies, among which the router-based approach demonstrates the strongest generalization. Experimental results show that our method significantly improves student model performance while reducing resource consumption and effectively aligning the student with the teachers' reasoning, offering a promising pathway toward efficient deployment of lightweight models.

📝 Abstract
Knowledge distillation has emerged as a pivotal technique for transferring knowledge from stronger large language models (LLMs) to smaller, more efficient models. However, traditional distillation approaches face challenges related to knowledge conflicts and high resource demands, particularly when leveraging multiple teacher models. In this paper, we introduce the concept of **Knowledge Purification**, which consolidates the rationales from multiple teacher LLMs into a single rationale, thereby mitigating conflicts and enhancing efficiency. To investigate the effectiveness of knowledge purification, we further propose five purification methods from various perspectives. Our experiments demonstrate that these methods not only improve the performance of the distilled model but also effectively alleviate knowledge conflicts. Moreover, router-based methods exhibit robust generalization capabilities, underscoring the potential of innovative purification techniques in optimizing multi-teacher distillation and facilitating the practical deployment of powerful yet lightweight models.
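The consolidation idea in the abstract can be sketched in a few lines. The snippet below is an illustrative stand-in, not the authors' method: it assumes each teacher emits a (rationale, answer) pair, and a simple heuristic "router" (majority vote over answers, a hypothetical proxy for the paper's learned router) selects one conflict-free rationale to distill into the student.

```python
# Hypothetical sketch of router-based knowledge purification:
# several teacher LLMs each produce a rationale plus a final answer,
# and a lightweight router picks one rationale consistent with the
# consensus answer, mitigating cross-teacher conflicts before
# distillation. The majority-vote routing rule is an assumption for
# illustration only.
from collections import Counter

def purify_rationales(teacher_outputs):
    """teacher_outputs: non-empty list of (rationale, answer) pairs.

    Returns a single (rationale, answer) whose answer matches the
    majority vote across teachers.
    """
    answers = [ans for _, ans in teacher_outputs]
    consensus, _ = Counter(answers).most_common(1)[0]
    # Route to the first rationale that agrees with the consensus.
    for rationale, ans in teacher_outputs:
        if ans == consensus:
            return rationale, consensus

# Toy example: two teachers agree, one conflicts.
outputs = [
    ("Add 2 and 3 to get 5.", "5"),
    ("2 + 3 = 5.", "5"),
    ("Misreads the sum and answers 6.", "6"),
]
rationale, answer = purify_rationales(outputs)
```

A learned router would replace the majority vote with a trained scoring model, but the interface (many teacher rationales in, one purified rationale out) is the same.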
Problem

Research questions and friction points this paper is trying to address.

Knowledge Distillation
Large Language Models
Knowledge Conflicts
Multi-Teacher
Model Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Purification
Multi-Teacher Knowledge Distillation
Large Language Models
Rationale Consolidation
Router-based Distillation
R
Ruihan Jin
Department of Automation, Tsinghua University
P
Pengpeng Shao
Department of Automation, Tsinghua University
Z
Zhengqi Wen
Tsinghua University
J
Jinyang Wu
Department of Automation, Tsinghua University
M
Mingkuan Feng
Department of Automation, Tsinghua University
S
Shuo Yang
Department of Automation, Tsinghua University
C
Chu Yuan Zhang
Department of Automation, Tsinghua University
J
Jianhua Tao
Department of Automation, Tsinghua University; Beijing National Research Center for Information Science and Technology