Revisiting In-context Learning Inference Circuit in Large Language Models

📅 2024-10-06
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
The inference mechanism underlying in-context learning (ICL) in large language models remains poorly understood. Method: The paper proposes an interpretable three-stage inference circuit: input text encoding → semantics merging → feature retrieval and copying. It uses linear-representation analysis of hidden states, similarity search on a task subspace, attention decomposition, and systematic ablation experiments to show that ICL is dominated by this primary circuit while several bypass mechanisms solve the task in parallel. Contribution/Results: The proposed circuit reproduces and unifies many observed ICL phenomena, and ablating any single stage severely degrades ICL performance, confirming that the circuit is the dominating mechanism. The work provides a structural, mechanistic account of the ICL inference process, laying groundwork for controllable reasoning and model editing.

📝 Abstract
In-context Learning (ICL) is an emerging few-shot learning paradigm on Language Models (LMs) whose inner mechanisms remain largely unexplored. Existing works describe the inner processing of ICL, but they struggle to capture all the inference phenomena observed in large language models. This paper therefore proposes a comprehensive circuit to model the inference dynamics and explain the observed phenomena of ICL. In detail, we divide ICL inference into 3 major operations: (1) Input Text Encode: LMs encode every input text (demonstrations and queries) into linear representations in the hidden states with sufficient information to solve ICL tasks. (2) Semantics Merge: LMs merge the encoded representations of demonstrations with their corresponding label tokens to produce joint representations of labels and demonstrations. (3) Feature Retrieval and Copy: LMs search for joint representations similar to the query representation on a task subspace and copy the retrieved representations into the query position. Language model heads then capture these copied label representations to a certain extent and decode them into predicted labels. The proposed inference circuit successfully captures many phenomena observed during the ICL process, making it a comprehensive and practical explanation of ICL inference. Moreover, ablation analysis shows that disabling any of the proposed steps seriously damages ICL performance, suggesting the proposed circuit is the dominating mechanism. Additionally, we confirm and list some bypass mechanisms that solve ICL tasks in parallel with the proposed circuit.
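The third operation, Feature Retrieval and Copy, can be pictured as an attention-like similarity search followed by a weighted copy. The sketch below is a minimal toy illustration of that idea, not the paper's actual implementation: the representations, the task-subspace projection `W_task`, and all dimensions are made-up assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_task, n_demos = 64, 8, 4

# Hypothetical joint label+demonstration representations (Semantics Merge
# output) and an encoded query representation (Input Text Encode output).
joint_reps = rng.normal(size=(n_demos, d_model))
query_rep = rng.normal(size=(d_model,))

# Assumed linear projection onto a low-dimensional "task subspace" in
# which the similarity search is carried out.
W_task = rng.normal(size=(d_model, d_task))

def retrieve_and_copy(query, keys, proj):
    """Toy Feature Retrieval and Copy: score demonstrations by scaled
    dot-product similarity in the task subspace, then copy a
    softmax-weighted mixture of the joint representations."""
    q = query @ proj
    k = keys @ proj
    scores = k @ q / np.sqrt(proj.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()          # softmax over demonstrations
    return weights @ keys             # copied label representation

copied = retrieve_and_copy(query_rep, joint_reps, W_task)
print(copied.shape)  # (64,)
```

In this toy view, a language-model head would then read the copied representation at the query position and decode it into a predicted label.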
Problem

Research questions and friction points this paper is trying to address.

Large Models
In-context Learning
Reasoning Mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Contextual Learning
Model Optimization
ICL Mechanism Understanding