🤖 AI Summary
Existing knowledge distillation methods for enhancing small language models' reasoning capabilities rely heavily on a single large language model (LLM) teacher and neglect systematic error attribution and correction. Method: This paper proposes a fault-aware multi-teacher knowledge distillation framework that introduces a peer-review mechanism—multiple heterogeneous LLM teachers cross-evaluate one another's reasoning paths—and applies dynamic thresholding to retain only high-quality paths. It further designs an error-attribution-driven instruction data generation strategy that explicitly models error types and the corresponding correction logic. Contribution/Results: Compared to single-teacher distillation baselines, the method achieves an average 7.2% accuracy gain across mathematical, commonsense, and logical reasoning tasks, improves error explanation coverage by 3.8×, and is the first to jointly optimize "learning correct solutions" and "understanding error causes" within a unified distillation paradigm.
📝 Abstract
While reasoning capabilities typically emerge in large language models (LLMs) with tens of billions of parameters, recent research focuses on improving smaller open-source models through knowledge distillation (KD) from commercial LLMs. However, many of these studies rely solely on responses from a single LLM as the gold rationale, unlike the natural human learning process, which involves understanding both the correct answers and the reasons behind mistakes. In this paper, we introduce a novel Fault-Aware DistIllation via Peer-Review (FAIR) approach: 1) Instead of merely obtaining rationales from teachers, our method asks teachers to identify and explain the student's mistakes, providing customized instruction-tuning data. 2) We design a simulated peer-review process among teacher LLMs, which retains only the generated rationales scored above an acceptance threshold. This reduces the chance of teachers guessing the correct answer with flawed rationales, improving the quality of the instructional data. Comprehensive experiments and analysis on mathematical, commonsense, and logical reasoning tasks demonstrate the effectiveness of our method.
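The peer-review filter described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `review` stands in for a hypothetical call that asks one teacher LLM to score another teacher's rationale on [0, 1], and the threshold value is an assumption.

```python
from statistics import mean

def peer_review_filter(rationales, review, threshold=0.5):
    """Keep only rationales whose mean peer score clears the threshold.

    rationales: dict mapping teacher name -> generated rationale text.
    review: callable (reviewer_name, rationale) -> score in [0, 1];
            in practice this would prompt a teacher LLM to rate the rationale.
    """
    accepted = []
    for author, rationale in rationales.items():
        # Each rationale is scored by every *other* teacher (no self-review).
        scores = [review(r, rationale) for r in rationales if r != author]
        if scores and mean(scores) >= threshold:
            accepted.append((author, rationale))
    return accepted

# Toy usage with canned scores standing in for real LLM judgments.
rationales = {t: f"rationale from {t}" for t in ("teacher_a", "teacher_b", "teacher_c")}
canned = {
    "rationale from teacher_a": 0.9,
    "rationale from teacher_b": 0.8,
    "rationale from teacher_c": 0.3,  # a flawed rationale peers rate poorly
}
review = lambda reviewer, rationale: canned[rationale]
kept = peer_review_filter(rationales, review, threshold=0.5)
# teacher_c's low-rated rationale is filtered out of the distillation data
```

In a full system the acceptance threshold would be set dynamically (e.g., per task or per batch), as the summary above notes; here it is a fixed constant for clarity.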