🤖 AI Summary
This work addresses the limitations of multi-teacher knowledge distillation for small language models, which often suffers from conflicting knowledge across teachers and high computational overhead, hindering both performance and training efficiency. To mitigate these issues, we propose a novel paradigm termed "knowledge purification," which consolidates the reasoning rationales from multiple large language model teachers into a single, consistent rationale, thereby alleviating conflicts and enhancing distillation effectiveness. This is the first work to introduce a knowledge purification mechanism; we design five distinct purification strategies, among which the router-based approach demonstrates superior generalization capability. Experimental results show that our method significantly improves student model performance while reducing resource consumption and effectively aligning the student with the teachers' reasoning, offering a promising pathway toward the efficient deployment of lightweight models.
📝 Abstract
Knowledge distillation has emerged as a pivotal technique for transferring knowledge from stronger large language models (LLMs) to smaller, more efficient models. However, traditional distillation approaches face challenges related to knowledge conflicts and high resource demands, particularly when leveraging multiple teacher models. In this paper, we introduce the concept of **Knowledge Purification**, which consolidates the rationales from multiple teacher LLMs into a single rationale, thereby mitigating conflicts and enhancing efficiency. To investigate the effectiveness of knowledge purification, we further propose five purification methods from various perspectives. Our experiments demonstrate that these methods not only improve the performance of the distilled model but also effectively alleviate knowledge conflicts. Moreover, router-based methods exhibit robust generalization capabilities, underscoring the potential of innovative purification techniques in optimizing multi-teacher distillation and facilitating the practical deployment of powerful yet lightweight models.