In-Context Learning Without Copying

📅 2025-11-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether induction heads are necessary for in-context learning (ICL) in Transformers. To test this, the authors propose *Hapax* training: a mechanistic intervention that masks the loss contribution of tokens correctly predictable by induction heads, removing the training reward for verbatim inductive copying. Although 31.7% of tokens are omitted from the loss, the resulting model remains comparable to the baseline on abstractive ICL tasks, surpasses it on 13 of 21 tasks, and achieves lower loss at positions that induction heads cannot predict. Attention analysis further suggests that alternative pathways, such as semantic alignment and pattern generalization, can sustain ICL without induction-based copying. By showing through interpretability-guided training that ICL can emerge without strong induction heads, the study challenges the prevailing assumption that induction heads are a prerequisite for ICL and offers a new perspective on the learning principles underlying large language models.

📝 Abstract
Induction heads are attention heads that perform inductive copying by matching patterns from earlier context and copying their continuations verbatim. As models develop induction heads, they often experience a sharp drop in training loss, a phenomenon cited as evidence that induction heads may serve as a prerequisite for more complex in-context learning (ICL) capabilities. In this work, we ask whether transformers can still acquire ICL capabilities when inductive copying is suppressed. We propose Hapax, a setting where we omit the loss contribution of any token that can be correctly predicted by induction heads. Despite a significant reduction in inductive copying, performance on abstractive ICL tasks (i.e., tasks where the answer is not contained in the input context) remains comparable and surpasses the vanilla model on 13 of 21 tasks, even though 31.7% of tokens are omitted from the loss. Furthermore, our model achieves lower loss values on token positions that cannot be predicted correctly by induction heads. Mechanistic analysis further shows that models trained with Hapax develop fewer and weaker induction heads but still preserve ICL capabilities. Taken together, our findings indicate that inductive copying is not essential for learning abstractive ICL mechanisms.
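The masking rule described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the function names are invented here, and a strict verbatim bigram match stands in for a trained induction head's prefix-matching behavior.

```python
def induction_predictable(tokens):
    """For each position i, mark True if token i could be predicted by
    simple inductive copying: some earlier occurrence of the preceding
    token tokens[i-1] was followed by this same token tokens[i]."""
    mask = [False] * len(tokens)
    for i in range(2, len(tokens)):
        for j in range(i - 1):
            if tokens[j] == tokens[i - 1] and tokens[j + 1] == tokens[i]:
                mask[i] = True
                break
    return mask

def hapax_loss_weights(tokens):
    """Zero the per-token loss weight at induction-predictable positions,
    so the model is never rewarded for verbatim inductive copying."""
    return [0.0 if m else 1.0 for m in induction_predictable(tokens)]
```

On the repeated sequence A B C A B C, only the second occurrences of the A→B and B→C continuations are masked; every first-seen token still contributes to the loss, which is how the paper's reported 31.7% of omitted tokens would be counted on real training data.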
Problem

Research questions and friction points this paper is trying to address.

Investigating whether transformers can acquire in-context learning without inductive copying
Proposing Hapax, which suppresses copying by omitting the loss contribution of induction-predictable tokens
Demonstrating that abstractive ICL capabilities survive even as induction heads weaken
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hapax omits the loss contribution of tokens that induction heads predict correctly
Models trained with Hapax develop fewer and weaker induction heads yet preserve ICL
Shows that abstractive ICL can be learned without inductive copying