Skill-Aware Data Selection and Fine-Tuning for Data-Efficient Reasoning Distillation

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost and low efficiency of conventional knowledge distillation for large language models, which typically relies on extensive supervised fine-tuning data. The authors propose a skill-centric distillation framework that introduces a skill decomposition mechanism to precisely guide smaller models in acquiring targeted reasoning capabilities. The approach comprises three key components: skill identification, skill-based data selection, and skill-aware supervised fine-tuning (SFT). Using only 1,000 training samples, the method outperforms random-selection SFT baselines by 1.6% and 1.4% on Qwen3-4B and Qwen3-8B, respectively, demonstrating substantially improved data efficiency and skill transfer effectiveness.

📝 Abstract
Large reasoning models such as DeepSeek-R1 and their distilled variants achieve strong performance on complex reasoning tasks. Yet, distilling these models often demands large-scale data for supervised fine-tuning (SFT), motivating the pursuit of data-efficient training methods. To address this, we propose a skill-centric distillation framework that efficiently transfers reasoning ability to weaker models with two components: (1) Skill-based data selection, which prioritizes examples targeting the student model's weaker skills, and (2) Skill-aware fine-tuning, which encourages explicit skill decomposition during problem solving. With only 1,000 training examples selected from a 100K teacher-generated corpus, our method surpasses random SFT baselines by +1.6% on Qwen3-4B and +1.4% on Qwen3-8B across five mathematical reasoning benchmarks. Further analysis confirms that these gains concentrate on skills emphasized during training, highlighting the effectiveness of skill-centric training for efficient reasoning distillation.
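The abstract's first component, skill-based data selection, can be pictured as scoring each candidate example by how weak the student model is on the skills it exercises, then keeping a small top-scoring budget. The sketch below is an illustrative assumption, not the paper's exact method: the skill tags, the accuracy estimates, and the mean-error-rate scoring rule are all hypothetical stand-ins.

```python
# Hypothetical sketch of skill-based data selection: prioritize examples
# whose tagged skills the student model handles worst. The scoring rule
# (mean error rate over an example's skills) is an illustrative assumption.

def select_by_skill(examples, skill_accuracy, budget):
    """Pick `budget` examples targeting the student's weakest skills.

    examples:       list of dicts, each with a non-empty "skills" list
    skill_accuracy: dict mapping skill name -> student accuracy in [0, 1]
    budget:         number of examples to keep (e.g. 1,000 in the paper)
    """
    def weakness(example):
        # Higher score = student is weaker on this example's skills.
        skills = example["skills"]
        return sum(1.0 - skill_accuracy.get(s, 0.0) for s in skills) / len(skills)

    return sorted(examples, key=weakness, reverse=True)[:budget]


corpus = [
    {"id": 1, "skills": ["algebra"]},
    {"id": 2, "skills": ["geometry", "algebra"]},
    {"id": 3, "skills": ["combinatorics"]},
]
acc = {"algebra": 0.9, "geometry": 0.5, "combinatorics": 0.3}
selected = select_by_skill(corpus, acc, budget=2)
print([e["id"] for e in selected])  # → [3, 2]
```

In this toy run the combinatorics example scores highest (error rate 0.7), so it is selected first; the second component, skill-aware fine-tuning, would then train the student on such examples with explicit skill decomposition in the solutions.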
Problem

Research questions and friction points this paper addresses.

reasoning distillation
data efficiency
supervised fine-tuning
skill-aware training
model compression
Innovation

Methods, ideas, and system contributions that make the work stand out.

skill-aware distillation
data-efficient fine-tuning
reasoning skill decomposition
skill-based data selection
supervised fine-tuning