Towards Efficient CoT Distillation: Self-Guided Rationale Selector for Better Performance with Fewer Rationales

📅 2025-09-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing Chain-of-Thought (CoT) distillation methods rely heavily on large-scale rationale datasets while neglecting rationale quality, often transferring erroneous or low-quality reasoning paths to student models. Method: We propose MoRSD, a model-oriented rationale selection framework that combines accuracy, diversity, and a new Rationale Difficulty (RD) metric with a student-feedback-driven, self-guided filtering mechanism to dynamically identify high-value reasoning paths. Contribution/Results: Experiments on seven benchmarks across three tasks show that MoRSD achieves an average performance gain of 4.6% while using only about 30% of the rationales, outperforming full-rationale distillation. This work highlights the critical role of few but high-quality reasoning samples in transferring knowledge to compact models, offering a practical paradigm for efficient and robust CoT distillation.

📝 Abstract
Chain-of-thought (CoT) distillation aims to enhance small language models' (SLMs) reasoning by transferring multi-step reasoning capability from larger teacher models. However, existing work underestimates rationale quality, focusing primarily on data quantity, which may transfer noisy or incorrect information to the student model. To address these issues, we propose Model-Oriented Rationale Selection Distillation (MoRSD), which discerns and selects high-quality rationales for distillation to further improve performance. We further propose a Rationale Difficulty (RD) metric to measure the ability of the student model to generate the correct answer under a given rationale. Compared to the baseline, we achieve a 4.6% average improvement on seven datasets over three tasks, using fewer rationales by controlling their accuracy, diversity, and difficulty. Our results reveal that a small portion of high-quality rationales can enhance the reasoning ability of student models more than the entire dataset. Our method promises to be a possible solution for efficient CoT distillation. Our code will be released at https://github.com/Leon221220/MoRSD.
Problem

Research questions and friction points this paper is trying to address.

Improving reasoning in small language models via distillation
Selecting high-quality rationales to avoid noisy information transfer
Enhancing performance with fewer but better rationales
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-Oriented Rationale Selection Distillation for CoT
Rationale Difficulty (RD) metric measures how well the student answers under a given rationale
Selects high-quality rationales based on accuracy, diversity, and difficulty
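The selection idea above can be sketched in code. This is a hypothetical illustration, not the authors' implementation: the `Rationale` fields, the RD proxy (1 minus the student's probability of the correct answer given the rationale), the difficulty band, and the 30% keep ratio are all assumptions made for the sketch.

```python
# Hypothetical sketch of MoRSD-style rationale selection.
# All names and thresholds are illustrative assumptions,
# not the paper's actual implementation.
from dataclasses import dataclass


@dataclass
class Rationale:
    text: str
    correct: bool        # teacher rationale leads to the right answer
    student_prob: float  # student's probability of the correct answer
                         # when conditioned on this rationale


def rationale_difficulty(r: Rationale) -> float:
    # Assumed RD proxy: a rationale that helps the student less
    # (lower conditional probability of the correct answer) is harder.
    return 1.0 - r.student_prob


def select(rationales, keep_ratio=0.3, rd_band=(0.2, 0.8)):
    # Keep only correct rationales whose difficulty falls in a
    # moderate band (neither trivial nor unusable), then retain
    # the top fraction of the original set, hardest first.
    lo, hi = rd_band
    pool = [r for r in rationales
            if r.correct and lo <= rationale_difficulty(r) <= hi]
    pool.sort(key=rationale_difficulty, reverse=True)
    k = max(1, int(len(rationales) * keep_ratio))
    return pool[:k]


rs = [Rationale("a", True, 0.9), Rationale("b", True, 0.5),
      Rationale("c", False, 0.4), Rationale("d", True, 0.3)]
chosen = select(rs)
print([r.text for r in chosen])  # -> ['d']
```

In this toy run, "a" is filtered out as too easy (RD 0.1), "c" as incorrect, and the hardest remaining rationale "d" is kept, mirroring the paper's claim that a small, well-chosen subset can outperform the full set.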
Jianzhi Yan
Harbin Institute of Technology, Shenzhen, China
Le Liu
Northwestern Polytechnical University
Youcheng Pan
Pengcheng Laboratory, Shenzhen, China
Shiwei Chen
Harbin Institute of Technology, Shenzhen, China
Yang Xiang
Shaoguan Research Institute of Data Industry, China
Buzhou Tang
Harbin Institute of Technology, Shenzhen, China