🤖 AI Summary
This work investigates the internal mechanisms of in-context learning (ICL) in large language models (LLMs), focusing on the Gemma-2 2B model. Addressing the open question of how ICL extracts and integrates task information from the individual examples in a few-shot prompt, the authors propose a two-stage hierarchical computation: *contextualization*—dynamic modulation of individual example representations conditioned on preceding examples—and *aggregation*—cross-example integration to infer the task and generate predictions. Using causal tracing, attention flow analysis, and inter-layer activation interventions, they empirically characterize the neural circuitry of ICL across five natural language tasks, identify critical inter-layer connectivity patterns, and dissociate the contextualization function (ambiguity-sensitive and task-dependent) from aggregation. This yields an interpretable, mechanistic circuit diagram of ICL grounded in Gemma-2, offering insight into how LLMs perform few-shot reasoning without parameter updates.
📝 Abstract
In-Context Learning (ICL) is an intriguing ability of large language models (LLMs). Despite substantial work on its behavioral aspects and on how it emerges in miniature setups, it remains unclear which mechanism assembles task information from the individual examples in a few-shot prompt. We use causal interventions to identify information flow in Gemma-2 2B for five naturalistic ICL tasks. We find that the model infers task information using a two-step strategy we call contextualize-then-aggregate: in the lower layers, the model builds up representations of individual few-shot examples, which are contextualized by preceding examples through connections between few-shot input and output tokens across the sequence. In the higher layers, these representations are aggregated to identify the task and prepare the prediction of the next output. The importance of the contextualization step varies across tasks and may grow in the presence of ambiguous examples. Overall, through rigorous causal analysis, our results shed light on the mechanisms by which ICL happens in language models.
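To make the abstract's central method concrete: the causal interventions described here are in the family of activation patching, where an activation cached from one run is swapped into another run to measure its causal effect on the output. Below is a minimal illustrative sketch on a toy three-layer "model"; all names, shapes, and the toy layers are hypothetical and stand in for the paper's actual Gemma-2 setup, not the authors' code.

```python
# Sketch of activation patching: cache activations from a "clean" run,
# splice them into a "corrupted" run one layer at a time, and measure
# how much of the clean behavior each layer's activation restores.
# Everything here is a toy illustration, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 4)) for _ in range(3)]  # toy 3-layer model

def run(x, patch_layer=None, patch_value=None):
    """Forward pass; optionally overwrite one layer's output with a
    cached activation from another run (the causal intervention)."""
    acts = []
    for i, w in enumerate(weights):
        x = np.tanh(x @ w)          # toy stand-in for a transformer layer
        if i == patch_layer:
            x = patch_value         # intervention: splice in cached activation
        acts.append(x)
    return x, acts

clean = rng.standard_normal(4)      # e.g. prompt with informative examples
corrupt = rng.standard_normal(4)    # e.g. prompt with examples ablated

_, clean_acts = run(clean)          # cache per-layer clean activations
base_out, _ = run(corrupt)          # corrupted baseline output

# Patch each layer's clean activation into the corrupted run; layers whose
# patch moves the output far from the baseline carry the task information.
for i in range(len(weights)):
    patched_out, _ = run(corrupt, patch_layer=i, patch_value=clean_acts[i])
    effect = np.linalg.norm(patched_out - base_out)
    print(f"layer {i}: causal effect {effect:.3f}")
```

In the paper's setting, the same recipe is applied to attention-mediated connections between few-shot input and output tokens, which is how the contextualization and aggregation stages are localized to lower and higher layers respectively.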