🤖 AI Summary
This work investigates how large language models (LLMs) acquire the meta-learning capability underlying in-context learning (ICL), i.e., the ability to induce task rules from demonstrations and generalize to unseen inputs, during pretraining, rather than merely memorizing or copying outputs. It challenges the prevailing single-stage "induction head" emergence hypothesis.
Method: We propose a multi-stage circuit evolution hypothesis grounded in the Transformer architecture and design an extended ICL benchmark that decouples task inference from generalization evaluation. Using dynamic attribution tracking and fine-grained circuit analysis, we trace the emergence of ICL-relevant computations across training stages.
Contribution/Results: We provide the first empirical evidence that ICL meta-learning arises from staged, structurally specific reconfiguration of computational circuits rather than an abrupt, monolithic change. This finding offers a unified explanation for phenomena such as cross-task transfer and generalization phase transitions, advancing our mechanistic understanding of ICL in Transformers.
📝 Abstract
Transformer-based language models exhibit In-Context Learning (ICL), where predictions are made adaptively based on context. While prior work links induction heads to ICL through a sudden jump in accuracy, this can only account for ICL when the answer is included within the context. However, an important property of practical ICL in large language models is the ability to meta-learn how to solve tasks from context rather than merely copying answers from it; how such an ability is acquired during training remains largely unexplored. In this paper, we experimentally clarify how this meta-learning ability is acquired by analyzing the dynamics of the model's circuits during training. Specifically, we extend the copy task from previous research into an In-Context Meta Learning setting, where models must infer a task from examples to answer queries. Interestingly, in this setting we find that there are multiple phases in the process of acquiring such abilities, and that a unique circuit emerges in each phase, in contrast to the single-phase change in induction heads. The emergence of these circuits can be related to several phenomena known in large language models, and our analysis leads to a deeper understanding of the source of the Transformer's ICL ability.
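To make the distinction concrete, here is a minimal sketch (not the paper's actual benchmark or data pipeline; prompt format and rule are illustrative assumptions) contrasting the classic copy task, where the query's answer is literally present in the context, with the In-Context Meta Learning setting, where the model must induce the task rule from demonstrations and apply it to an unseen query:

```python
# Toy prompt constructors contrasting copy-style ICL with meta-learning ICL.
# The "k->v" prompt format and the uppercase rule are illustrative
# assumptions, not the paper's actual task specification.

def copy_task_prompt(pairs, query):
    """Induction-head setting: the query key appeared earlier in context,
    so the answer can be produced by copying."""
    context = " ".join(f"{k}->{v}" for k, v in pairs)
    return f"{context} {query}->"

def meta_learning_prompt(rule, demo_inputs, query):
    """Meta-learning setting: the query input is NOT among the demos;
    the model must infer the rule and generalize to the new input."""
    demos = " ".join(f"{x}->{rule(x)}" for x in demo_inputs)
    return f"{demos} {query}->"

# Copy task: 'B' already appears in context, so '2' is retrievable by copying.
print(copy_task_prompt([("A", "1"), ("B", "2")], "B"))
# -> A->1 B->2 B->

# Meta-learning: the rule is "uppercase the letter"; 'c' never appears in
# the demonstrations, so the continuation 'C' must be inferred, not copied.
print(meta_learning_prompt(str.upper, ["a", "b"], "c"))
# -> a->A b->B c->
```

In the first prompt the correct continuation is recoverable by pattern matching over the context itself, which induction heads suffice for; in the second, no copyable answer exists, which is the ability whose acquisition the paper traces across training phases.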