Updating Parametric Knowledge with Context Distillation Retains Post-Training Capabilities

📅 2026-02-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses catastrophic forgetting in large language models during continual learning, where instruction-following and reasoning capabilities acquired in post-training are degraded when new knowledge is incorporated. To reconcile knowledge updating with capability retention, the authors propose Distillation via Split Contexts (DiSC), a continual adaptation method that applies context distillation efficiently, without explicit generation steps. DiSC derives teacher and student distributions by conditioning on distinct segments of each training example and minimizes the KL divergence over the tokens the two contexts share. Experiments across four post-trained models and two adaptation domains show that DiSC consistently outperforms existing fine-tuning and distillation approaches, achieving the best trade-off between acquiring new knowledge and preserving previously learned abilities.

📝 Abstract
Post-training endows pretrained LLMs with a variety of desirable skills, including instruction-following, reasoning, and others. However, these post-trained LLMs only encode knowledge up to a cut-off date, necessitating continual adaptation. Unfortunately, existing solutions cannot simultaneously learn new knowledge from an adaptation document corpus and mitigate the forgetting of earlier learned capabilities. To address this, we introduce Distillation via Split Contexts (DiSC), a simple context-distillation-based approach for continual knowledge adaptation. DiSC derives student and teacher distributions by conditioning on distinct segments of the training example and minimizes the KL divergence over the shared tokens. This allows us to apply context distillation efficiently, without requiring explicit generation steps during training. We run experiments on four post-trained models and two adaptation domains. Compared to prior finetuning and distillation methods for continual adaptation, DiSC consistently achieves the best trade-off between learning new knowledge and mitigating forgetting of previously learned skills like instruction-following, reasoning, and factual knowledge.
Problem

Research questions and friction points this paper is trying to address.

continual adaptation
knowledge updating
catastrophic forgetting
post-training capabilities
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

context distillation
continual adaptation
knowledge updating
KL divergence
large language models