Distribution-Aligned Sequence Distillation for Superior Long-CoT Reasoning

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing sequence-level knowledge distillation methods suffer from insufficient modeling of the teacher’s output distribution, leading to limited generalization in student models due to three key issues: inadequate distribution representation, capacity mismatch between teacher and student, and exposure bias between training and inference. To address these challenges, this work proposes the Distribution-Aligned Sequence Distillation (DASD) framework, which introduces an explicit distribution alignment mechanism into sequence distillation for the first time. DASD jointly optimizes teacher distribution modeling and student learning capacity, effectively mitigating the aforementioned limitations. Combined with lightweight open-source large language model training, high-quality synthetic data filtering, and an improved supervised fine-tuning strategy, the resulting DASD-4B-Thinking model—trained on only 448,000 samples—achieves state-of-the-art performance among comparable open-source models on benchmarks covering mathematical reasoning, scientific reasoning, and code generation, even surpassing larger-scale counterparts.

📝 Abstract
In this report, we introduce DASD-4B-Thinking, a lightweight yet highly capable, fully open-source reasoning model. It achieves SOTA performance among open-source models of comparable scale across challenging benchmarks in mathematics, scientific reasoning, and code generation -- even outperforming several larger models. We begin by critically reexamining a widely adopted distillation paradigm in the community: supervised fine-tuning (SFT) on teacher-generated responses, also known as sequence-level distillation. Although a series of recent works following this scheme have demonstrated remarkable efficiency and strong empirical performance, they are primarily grounded in the SFT perspective. Consequently, these approaches focus predominantly on designing heuristic rules for SFT data filtering, while largely overlooking the core principle of distillation itself -- enabling the student model to learn the teacher's full output distribution so as to inherit its generalization capability. Specifically, we identify three critical limitations in current practice: i) inadequate representation of the teacher's sequence-level distribution; ii) misalignment between the teacher's output distribution and the student's learning capacity; and iii) exposure bias arising from teacher-forced training versus autoregressive inference. In summary, these shortcomings reflect a systemic absence of explicit teacher-student interaction throughout the distillation process, leaving the essence of distillation underexploited. To address these issues, we propose several methodological innovations that collectively form an enhanced sequence-level distillation training pipeline. Remarkably, DASD-4B-Thinking obtains competitive results using only 448K training samples -- an order of magnitude fewer than those employed by most existing open-source efforts. To support community research, we publicly release our models and the training dataset.
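The contrast the abstract draws can be made concrete with a minimal numpy sketch (hypothetical function names, not the paper's implementation): plain sequence-level distillation reduces to cross-entropy on tokens the teacher sampled (hard targets, teacher-forced), whereas distribution alignment matches the teacher's full per-token output distribution, e.g. via a KL divergence.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_kd_loss(student_logits, teacher_tokens):
    """Sequence-level KD as commonly practiced: cross-entropy on a
    teacher-generated token sequence (hard targets only)."""
    probs = softmax(student_logits)  # shape (T, V)
    picked = probs[np.arange(len(teacher_tokens)), teacher_tokens]
    return -np.mean(np.log(picked))

def distribution_aligned_loss(student_logits, teacher_logits):
    """Distribution alignment: per-token KL(teacher || student),
    so the student sees the teacher's full output distribution,
    not just one sampled token per step."""
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))
```

The hard-target loss is blind to everything the teacher assigns probability to besides the sampled token; the KL term is zero only when the student reproduces the teacher's whole distribution, which is the "inadequate distribution representation" gap the paper targets.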
Problem

Research questions and friction points this paper is trying to address.

sequence-level distillation
output distribution
teacher-student alignment
exposure bias
model distillation
Innovation

Methods, ideas, or system contributions that make the work stand out.

sequence-level distillation
distribution alignment
exposure bias
teacher-student interaction
open-source reasoning model