PACED: Distillation at the Frontier of Student Competence

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of standard large language model distillation, which wastes computational resources on questions either already mastered or far beyond the student's capability, causing the signal-to-noise ratio of distillation gradients to vanish near pass-rate extremes. To remedy this, the authors propose the PACED framework, which leverages the boundary-vanishing structure of distillation gradients to design a pass-rate weighting function $w(p) = p^\alpha(1 - p)^\beta$, focusing learning on the student's zone of proximal development. Theoretically, this Beta-kernel weighting arises naturally from the signal-to-noise ratio and exhibits minimax robustness. Requiring only student rollouts (forward inference) to estimate pass rates, with no architectural modifications, and compatible with either KL divergence direction, PACED integrates pass-rate-weighted distillation, a two-stage forward/reverse KL training scheme, and student self-assessment of competence boundaries. It consistently outperforms baselines in both teacher-student and self-distillation settings, significantly boosting performance on standard reasoning benchmarks while effectively mitigating knowledge forgetting.
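The Beta-kernel weight described above is simple to state in code. The following is a minimal sketch, not the paper's implementation; the function name and the default exponents `alpha = beta = 1.0` are assumptions for illustration:

```python
def paced_weight(p: float, alpha: float = 1.0, beta: float = 1.0) -> float:
    """Beta-kernel pass-rate weight w(p) = p^alpha * (1 - p)^beta.

    Down-weights questions the student already masters (p near 1)
    and questions far beyond its reach (p near 0); the weight peaks
    at the competence frontier p = alpha / (alpha + beta).
    """
    return (p ** alpha) * ((1.0 - p) ** beta)
```

At both pass-rate extremes the weight vanishes, mirroring the paper's observation that the gradient signal-to-noise ratio vanishes there, so those questions contribute little to the weighted objective.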

📝 Abstract
Standard LLM distillation wastes compute on two fronts: problems the student has already mastered (near-zero gradients) and problems far beyond its reach (incoherent gradients that erode existing capabilities). We show that this waste is not merely intuitive but structurally inevitable: the gradient signal-to-noise ratio in distillation provably vanishes at both pass-rate extremes. This theoretical observation leads to Paced, a framework that concentrates distillation on the zone of proximal development -- the frontier of a student model's competence -- via a principled pass-rate weight $w(p) = p^\alpha(1 - p)^\beta$ derived from the boundary-vanishing structure of distillation gradients. Key results: (1) Theory: We prove that the Beta kernel $w(p) = p^\alpha(1-p)^\beta$ is a leading-order weight family arising from the SNR structure of distillation, and that it is minimax-robust -- under bounded multiplicative misspecification, the worst-case efficiency loss is only $O(\delta^2)$. (2) Distillation: When distilling from a larger teacher to a smaller student with forward KL, Paced achieves significant gains over the base model while keeping benchmark forgetting low. (3) Self-distillation: On instruction-tuned models with reverse KL, gains likewise exceed the baselines. (4) Two-stage synergy: A forward-KL-then-reverse-KL schedule yields the strongest results in our setting, reaching substantial improvements on standard reasoning benchmarks -- supporting a mode-coverage-then-consolidation interpretation of the distillation process. All configurations require only student rollouts to estimate pass rates, need no architectural changes, and are compatible with either KL direction.
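The abstract's pipeline -- estimate each question's pass rate from student rollouts, then weight that question's distillation loss by the Beta kernel -- can be sketched as follows. This is a hedged illustration, not the authors' code: the function names, the list-based interfaces, and the default exponents are all assumptions.

```python
def estimate_pass_rate(rollouts_correct: list[bool]) -> float:
    """Estimate p as the fraction of correct student rollouts on one question."""
    return sum(rollouts_correct) / max(len(rollouts_correct), 1)

def paced_loss(kl_per_question: list[float],
               pass_rates: list[float],
               alpha: float = 1.0,
               beta: float = 1.0) -> float:
    """Weighted distillation objective: each question's KL term (forward or
    reverse, per the training stage) is scaled by w(p) = p^alpha (1-p)^beta,
    then normalized by the total weight."""
    weights = [(p ** alpha) * ((1.0 - p) ** beta) for p in pass_rates]
    total = sum(weights) or 1.0  # guard against all-zero weights
    return sum(w * kl for w, kl in zip(weights, kl_per_question)) / total
```

Because only the student's own rollouts are needed to estimate `p`, the weighting is agnostic to the KL direction, which is what lets the same machinery serve both stages of the forward-then-reverse schedule.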
Problem

Research questions and friction points this paper is trying to address.

distillation
student competence
gradient signal-to-noise ratio
zone of proximal development
language model
Innovation

Methods, ideas, or system contributions that make the work stand out.

distillation
zone of proximal development
gradient signal-to-noise ratio
Beta kernel weighting
knowledge distillation