🤖 AI Summary
This paper addresses the fundamental question of why large language models (LLMs) generalize to unseen tasks via in-context learning (ICL) without parameter updates. We propose that ICL is intrinsically *implicit knowledge distillation at inference time*: prompt examples dynamically induce the model to construct a task-specific “reference model.” Methodologically, we formalize the inference-time attention mechanism as a knowledge transfer process; derive a generalization bound based on Rademacher complexity; and establish an MMD-based theoretical framework to characterize how prompt distribution shift affects performance. Our unified framework explains diverse ICL phenomena—including task adaptation, example ordering sensitivity, and cross-task interference—by bridging gradient-based, distributional, and attention-based perspectives. Theoretically grounded and empirically verifiable, it provides principled foundations for prompt engineering and automatic example selection.
📝 Abstract
In-context learning (ICL) allows large language models (LLMs) to solve novel tasks without weight updates. Despite its empirical success, the mechanism behind ICL remains poorly understood, limiting our ability to interpret, improve, and reliably apply it. In this paper, we propose a new theoretical perspective that interprets ICL as an implicit form of knowledge distillation (KD), where prompt demonstrations guide the model to form a task-specific reference model during inference. Under this view, we derive a Rademacher complexity-based generalization bound and prove that the bias of the distilled weights grows linearly with the Maximum Mean Discrepancy (MMD) between the prompt and target distributions. This theoretical framework explains several empirical phenomena and unifies prior gradient-based and distributional analyses. To the best of our knowledge, this is the first work to formalize inference-time attention as a distillation process, providing theoretical insights for future prompt engineering and automated demonstration selection.
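The MMD quantity that the abstract's bias bound depends on can be illustrated with a minimal kernel two-sample estimator. This is a generic sketch, not the paper's implementation; the RBF kernel, its bandwidth `gamma`, and the Gaussian sample distributions are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples X ~ P and Y ~ Q:
    # E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
prompt = rng.normal(0.0, 1.0, size=(200, 2))       # stand-in for the prompt distribution
target_near = rng.normal(0.0, 1.0, size=(200, 2))  # matched target distribution
target_far = rng.normal(3.0, 1.0, size=(200, 2))   # shifted target distribution

print(mmd2(prompt, target_near))  # small: distributions match
print(mmd2(prompt, target_far))   # larger: prompt-target distribution shift
```

Under the abstract's claim, the larger the second value (the prompt-target shift), the larger the bias of the implicitly distilled weights.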