🤖 AI Summary
Existing chain-of-thought (CoT) distillation methods underperform on scientific reasoning tasks primarily because large language models often produce erroneous or superficial reasoning traces, yielding low-quality distillation data. To address this, we propose an evolutionary CoT distillation framework: it first generates initial reasoning trajectories via multi-LLM collaborative inference augmented with domain knowledge; then defines a fitness function grounded in correctness and logical coherence; and, as its key novelty, incorporates evolutionary computation into CoT optimization through novelty-based selection, reflective recombination, and mutation operators that iteratively refine reasoning quality. Crucially, the method decouples distillation quality from teacher-model accuracy, enabling high-fidelity training data to be extracted even from flawed reasoning. Experiments show that the evolved CoT dataset substantially improves small student models, achieving state-of-the-art results across multiple scientific reasoning benchmarks.
📝 Abstract
While chain-of-thought (CoT) distillation from advanced large language models (LLMs) has proven effective in general reasoning tasks, it struggles in scientific domains, where even advanced models often produce incorrect or superficial reasoning due to high complexity and specialized knowledge requirements. Directly distilling from such flawed outputs results in low-quality training data and limits the performance of smaller student models. To overcome this, we propose CoT-Evo, an evolutionary CoT distillation framework. It begins by constructing a diverse pool of reasoning trajectories from multiple LLM thinkers, enriches them with automatically retrieved domain knowledge, and iteratively refines the trajectories using novelty-driven selection, reflective recombination, and mutation. The refinement is guided by a fitness function that evaluates answer correctness, coherence, and effective knowledge utilization. This yields a high-quality CoT dataset tailored for scientific reasoning. We use this evolved dataset to fine-tune a compact model, which achieves state-of-the-art performance on scientific reasoning benchmarks. Our work establishes a scalable approach to synthesizing high-fidelity scientific reasoning data from diverse and fallible LLMs.
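The evolutionary loop described in the abstract can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `fitness`, `recombine`, and `mutate` are hypothetical stand-ins for CoT-Evo's fitness function (correctness, coherence, knowledge use), reflective recombination, and mutation operators, and the trajectories are plain lists of strings.

```python
import random

def fitness(trajectory):
    # Stand-in score. The paper's fitness combines answer correctness,
    # coherence, and knowledge utilization; here we use chain length.
    return len(trajectory)

def recombine(a, b):
    # Toy "reflective recombination": splice two reasoning chains.
    cut = len(a) // 2
    return a[:cut] + b[cut:]

def mutate(trajectory, rng):
    # Toy mutation: revise one randomly chosen reasoning step.
    t = list(trajectory)
    i = rng.randrange(len(t))
    t[i] = t[i] + " (revised)"
    return t

def evolve(pool, generations=3, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        # Selection: keep the fitter half as parents (the paper uses
        # novelty-driven selection to preserve diverse trajectories).
        pool.sort(key=fitness, reverse=True)
        parents = pool[: max(2, len(pool) // 2)]
        # Refill the pool with recombined and mutated offspring.
        children = []
        for _ in range(len(pool) - len(parents)):
            a, b = rng.sample(parents, 2)
            children.append(mutate(recombine(a, b), rng))
        pool = parents + children
    # Return the highest-fitness trajectory for the distillation dataset.
    return max(pool, key=fitness)

pool = [["step A", "step B"], ["step C"], ["step D", "step E", "step F"]]
best = evolve(pool)
```

In CoT-Evo proper, the initial pool comes from multiple LLM thinkers augmented with retrieved domain knowledge, so the loop above would operate on full reasoning traces rather than toy lists.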