Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?

📅 2026-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Although self-distillation can improve the efficiency of large language models, it may also impair their mathematical reasoning, particularly on out-of-distribution tasks. This work reveals that self-distillation suppresses the model's expression of epistemic uncertainty (verbalized uncertainty during reasoning), thereby undermining reasoning robustness. The detrimental effect is most pronounced when task coverage is low but the teacher's conditioning context is rich. By systematically varying the teacher's context richness and task coverage, the authors evaluate this mechanism on Qwen3-8B, DeepSeek-Distill-Qwen-7B, and Olmo3-7B-Instruct, observing performance drops of up to 40% and demonstrating that appropriately calibrated expression of uncertainty is crucial for robust reasoning.

📝 Abstract
Self-distillation has emerged as an effective post-training paradigm for LLMs, often improving performance while shortening reasoning traces. However, in mathematical reasoning, we find that it can reduce response length while degrading performance. We trace this degradation to the suppression of epistemic verbalization: the model's expression of uncertainty during reasoning. Through controlled experiments varying conditioning context richness and task coverage, we show that conditioning the teacher on rich information suppresses uncertainty expression, enabling rapid in-domain optimization with limited task coverage but harming OOD performance, where unseen problems benefit from expressing uncertainty and adjusting accordingly. Across Qwen3-8B, DeepSeek-Distill-Qwen-7B, and Olmo3-7B-Instruct, we observe performance drops of up to 40%. Our findings highlight that exposing appropriate levels of uncertainty is crucial for robust reasoning and underscore the importance of optimizing reasoning behavior beyond merely reinforcing correct answer traces.
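To make the setup concrete, the self-distillation pipeline the abstract describes (sample traces from the model itself under varying conditioning, keep correct ones, fine-tune on them) can be sketched as below. This is a toy illustration, not the paper's implementation: `teacher_generate` is a hypothetical stand-in for sampling from an LLM, and the hard-coded trace texts merely mimic the observed pattern that richly conditioned teachers produce short, confident traces while weakly conditioned ones verbalize uncertainty.

```python
# Illustrative sketch of self-distillation data construction
# (all function names and trace contents are hypothetical).
from dataclasses import dataclass

@dataclass
class Trace:
    problem: str
    answer: str
    text: str  # the reasoning trace the student would be trained on

def teacher_generate(problem: str, gold_answer: str, rich_context: bool) -> Trace:
    """Stand-in for sampling from the model itself as teacher.

    With rich conditioning (e.g., solution hints in the prompt), traces
    tend to be short and confident; without it, they verbalize
    uncertainty ("I'm not sure", "let me check")."""
    if rich_context:
        text = f"Compute directly. Answer: {gold_answer}"
    else:
        text = (f"Hmm, I'm not sure; let me check a few approaches first. "
                f"Answer: {gold_answer}")
    return Trace(problem, gold_answer, text)

def build_distillation_set(dataset, rich_context: bool):
    """Rejection sampling: keep only traces ending in the correct answer.

    The conditioning context itself is not stored, so the student is
    trained on traces whose style (confident vs. uncertain) was shaped
    by information it will not see at test time."""
    kept = []
    for problem, gold in dataset:
        trace = teacher_generate(problem, gold, rich_context)
        if trace.text.endswith(f"Answer: {gold}"):
            kept.append(trace)
    return kept
```

Under this sketch, distilling from the `rich_context=True` set reinforces short, certainty-laden traces; the paper's finding is that this in-domain shortcut suppresses the uncertainty expression that helps on out-of-distribution problems.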
Problem

Research questions and friction points this paper is trying to address.

self-distillation
reasoning capability
epistemic verbalization
uncertainty expression
out-of-distribution generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

self-distillation
epistemic verbalization
reasoning robustness
out-of-distribution generalization
uncertainty expression