🤖 AI Summary
This work addresses the problem of selecting optimal reasoning paths from multi-teacher model outputs to enhance the reasoning capabilities of smaller student large language models (e.g., 32B). To overcome the limitations of global naturalness, where holistic fluency fails to reflect step-wise reasoning quality, the authors propose a teacher-response selection method based on local naturalness. Specifically, within a sliding window over reasoning steps, the student model scores each step's conditional probability, yielding a fine-grained local likelihood metric for path evaluation and high-quality data filtering. This selection is integrated with multi-teacher response aggregation and supervised fine-tuning (SFT) for reasoning distillation. Experiments on mathematical reasoning benchmarks demonstrate a +9.4 percentage point accuracy gain for the student model, substantially outperforming both single-best-teacher training and global selection strategies. To the authors' knowledge, this is the first approach that makes reasoning paths locally discriminable and optimizable in multi-teacher settings.
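The sliding-window scoring can be made concrete with a short sketch. The snippet below, written against the Hugging Face `transformers` API, scores each reasoning step by the student's mean token log-probability conditioned only on the prompt and the previous few steps; the step segmentation, window size of 3, and mean aggregation are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch of local-naturalness scoring with a sliding window.
# Assumptions (not from the paper): traces arrive pre-split into steps,
# the window covers the 3 preceding steps, and the trace score is the
# mean of per-step mean token log-probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def step_logprob(model, tokenizer, context: str, step: str) -> float:
    """Mean log-probability of `step`'s tokens given `context`."""
    ctx_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + step, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # [1, seq_len, vocab]
    # logits at position t predict token t+1, so shift by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, ctx_len:]  # the step's tokens only
    token_lp = log_probs[ctx_len - 1:].gather(1, targets[:, None])
    return token_lp.mean().item()

def local_naturalness(model, tokenizer, prompt: str,
                      steps: list[str], window: int = 3) -> float:
    """Average per-step score, each step conditioned only on the prompt
    plus the `window` most recent steps (not the full trace)."""
    scores = [
        step_logprob(model, tokenizer,
                     prompt + "".join(steps[max(0, i - window):i]), step)
        for i, step in enumerate(steps)
    ]
    return sum(scores) / len(scores)
```

Conditioning on a short local window keeps the score sensitive to step-wise quality even for very long traces, where a single sequence-level log-probability is increasingly dominated by length and style.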
📝 Abstract
Distilling long reasoning traces (10K+ tokens) from stronger teacher models into smaller student LLMs via SFT has emerged as a standard paradigm. This approach is practical and efficient: it leverages the ease of generating abundant reasoning data from stronger models and provides a direct, data-driven way to teach less capable models better reasoning. While previous work has largely focused on prompt selection with responses from a single teacher, the equally important problem of choosing the best response when multiple teacher outputs are available for a single prompt remains underexplored. This challenge becomes especially important in a multi-teacher setting, where different students may benefit from the outputs of different teachers. This paper fills that gap with a systematic study of response selection for reasoning distillation. We first show that the current method, which picks the responses to which the student assigns the highest global log-probability (global naturalness), fails when responses come from multiple teachers: global naturalness no longer correlates with downstream performance, especially as the reasoning traces from strong teachers become longer. To overcome this problem, we introduce Local Naturalness, which measures the student's log-probabilities over short, sequential reasoning steps conditioned only on a small local window. Local Naturalness enables two applications: 1) Teacher Selection: aggregating local scores across prompts reliably identifies the most helpful teacher. 2) Response Selection from Multiple Teachers: when mixing answers from many teachers, Local Naturalness boosts a 32B student's accuracy on math benchmarks by 9.4 percentage points over global selection, also surpassing the performance achieved by training on data from the single best teacher. These results highlight the power of localized data-quality evaluation and data mixing for more effective reasoning distillation.
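The two applications reduce to different aggregations of the same local score. Below is a minimal sketch reusing the hypothetical `local_naturalness` function above; the data layout and the mean-over-prompts aggregation are assumptions for illustration, not the paper's exact procedure.

```python
# Teacher Selection: average local naturalness across prompts and pick
# the teacher whose responses the student finds most natural overall.
# `responses_by_teacher` maps teacher name -> list of (prompt, steps).
def select_teacher(model, tokenizer, responses_by_teacher):
    def avg_score(items):
        return sum(local_naturalness(model, tokenizer, p, s)
                   for p, s in items) / len(items)
    return max(responses_by_teacher,
               key=lambda t: avg_score(responses_by_teacher[t]))

# Response Selection from Multiple Teachers: for one prompt, keep the
# candidate trace with the highest local naturalness as SFT data.
def select_response(model, tokenizer, prompt, candidate_traces):
    return max(candidate_traces,
               key=lambda steps: local_naturalness(model, tokenizer,
                                                   prompt, steps))
```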