Distilling the Essence: Efficient Reasoning Distillation via Sequence Truncation

📅 2025-12-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Distilling reasoning capabilities from large language models (LLMs) into student models incurs prohibitive computational overhead because training sequences are long, comprising a prompt, a chain-of-thought (CoT) rationale, and an answer. Method: This paper proposes an efficient reasoning distillation paradigm that supervises only the CoT segment. Grounded in the empirical finding that early CoT tokens encode the most critical reasoning knowledge, the authors introduce a quantifiable sequence truncation protocol: retaining only the first 50% of tokens of each training sequence (i.e., truncating the latter 50%) during training. Contribution/Results: The paper is the first to empirically reveal the knowledge density of early CoT tokens and establishes a lightweight distillation framework with a controllable computation-quality tradeoff. On mathematical reasoning benchmarks, the method retains about 94% of full-sequence performance while reducing training time, memory footprint, and FLOPs by approximately 50%, significantly improving the efficiency and practicality of reasoning distillation.

📝 Abstract
Distilling the reasoning capabilities of a large language model (LLM) into a smaller student model often involves training on substantial amounts of reasoning data. However, distillation over lengthy sequences with prompt (P), chain-of-thought (CoT), and answer (A) segments makes the process computationally expensive. In this work, we investigate how the allocation of supervision across the different segments (P, CoT, A) affects student performance. Our analysis shows that selective knowledge distillation over only the CoT tokens can be effective when the prompt and answer information is encompassed by it. Building on this insight, we establish a truncation protocol to quantify computation-quality tradeoffs as a function of sequence length. We observe that training on only the first 50% of tokens of every training sequence can retain, on average, ≈94% of full-sequence performance on math benchmarks while reducing training time, memory usage, and FLOPs by about 50% each. These findings suggest that reasoning distillation benefits from prioritizing early reasoning tokens and provides a simple lever for computation-quality tradeoffs. Code is available at https://github.com/weiruichen01/distilling-the-essence.
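The abstract describes two composable ideas: supervise only the CoT segment, and truncate every training sequence to its first 50% of tokens. A minimal sketch of how a per-token loss mask might combine the two is shown below; the function name, the segment-length interface, and the assumption that the two ideas compose by intersection are illustrative, not taken from the paper's code.

```python
def truncate_and_mask(prompt_len, cot_len, answer_len, keep_frac=0.5):
    """Sketch of the truncation protocol (hypothetical interface).

    Keeps only the first `keep_frac` of the full P|CoT|A sequence,
    then marks for supervision only the CoT positions that survive
    the truncation. Returns (truncated_length, per-token loss mask).
    """
    total = prompt_len + cot_len + answer_len
    kept = int(total * keep_frac)  # train on the first 50% of tokens
    cot_start, cot_end = prompt_len, prompt_len + cot_len
    # True only for CoT tokens inside the truncated window.
    mask = [cot_start <= i < min(cot_end, kept) for i in range(kept)]
    return kept, mask
```

In a standard distillation loop, such a mask would typically be applied by zeroing the loss on masked positions (e.g., setting their labels to an ignore index), so prompt tokens, answer tokens, and truncated CoT tokens contribute no gradient.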
Problem

Research questions and friction points this paper is trying to address.

How to distill reasoning from large models into small ones efficiently
High computational cost of distillation over lengthy training sequences
How to allocate supervision across the prompt, reasoning, and answer segments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective distillation focusing only on CoT tokens
Truncation protocol that trains on only the first 50% of each sequence
Cuts training time, memory, and FLOPs by about 50% each while retaining ≈94% of full-sequence performance