🤖 AI Summary
This study addresses the challenge of efficiently selecting high-quality reasoning trajectories from strong teacher models to improve the general reasoning capabilities of small student models. We propose a sample-selection strategy that jointly measures task difficulty and reasoning-path diversity, yielding the high-value reasoning dataset NaturalThoughts. Compared with random sampling and existing datasets (e.g., OpenThoughts, LIMO), this approach significantly improves data-utilization efficiency and the efficacy of knowledge transfer. We systematically evaluate the method via supervised fine-tuning and reasoning-trace distillation on mainstream open-weight models (e.g., Llama, Qwen), with ablation studies confirming the contribution of each design component. Student models trained on NaturalThoughts outperform those trained on existing reasoning datasets across rigorous STEM benchmarks, including GPQA-Diamond, MMLU-Pro, and SuperGPQA, demonstrating substantial gains in reasoning accuracy and robustness.
📝 Abstract
Recent work has shown that distilling reasoning traces from a larger teacher model via supervised finetuning outperforms reinforcement learning with the smaller student model alone (Guo et al. 2025). However, there has been no systematic study of which kinds of reasoning demonstrations from the teacher are most effective in improving the student model's reasoning capabilities. In this work we curate the high-quality "NaturalThoughts" dataset by selecting reasoning traces from a strong teacher model over a large pool of questions from NaturalReasoning (Yuan et al. 2025). We first conduct a systematic analysis of the factors that affect the distillation of reasoning capabilities, in terms of sample efficiency and scalability for general reasoning tasks. We observe that simply scaling up data size with random sampling is a strong baseline with steady performance gains. Further, we find that selecting difficult examples that require more diverse reasoning strategies transfers the teacher model's reasoning skills more sample-efficiently. Evaluated on both Llama and Qwen models, training with NaturalThoughts outperforms existing reasoning datasets such as OpenThoughts and LIMO on general STEM reasoning benchmarks, including GPQA-Diamond, MMLU-Pro, and SuperGPQA.
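To make the selection idea concrete, here is a minimal illustrative sketch of greedy selection that jointly favors difficult questions and not-yet-covered reasoning strategies. This is a hypothetical simplification for intuition only, not the paper's actual pipeline: the `difficulty` score and `strategies` labels are assumed to be precomputed by some annotator (e.g., a grader model), and the additive scoring rule is an assumption of this sketch.

```python
def select_traces(samples, k):
    """Greedily pick k samples, scoring each remaining candidate by its
    difficulty plus the number of reasoning-strategy labels it would add
    that no already-selected sample covers.

    Each sample is a dict with:
      - "difficulty": float proxy for question hardness (assumed precomputed)
      - "strategies": set of reasoning-strategy labels found in its trace
    """
    pool = list(samples)
    selected, covered = [], set()
    for _ in range(min(k, len(pool))):
        # Score = difficulty + marginal strategy novelty (illustrative rule).
        best = max(pool, key=lambda s: s["difficulty"] + len(s["strategies"] - covered))
        pool.remove(best)
        selected.append(best)
        covered |= best["strategies"]
    return selected


samples = [
    {"id": 1, "difficulty": 0.9, "strategies": {"deduction"}},
    {"id": 2, "difficulty": 0.2, "strategies": {"analogy", "backtracking"}},
    {"id": 3, "difficulty": 0.8, "strategies": {"deduction"}},
]
# The easy question covering two unseen strategies is picked first,
# then the hardest of the rest.
picked = select_traces(samples, 2)
```

The greedy marginal-novelty term is what distinguishes this from pure difficulty ranking: once a strategy is covered, further samples using only that strategy gain no diversity credit.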