Mechanistic Fine-tuning for In-context Learning

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the weak generalization and high training costs of large language models (LLMs) in in-context learning (ICL), this paper proposes Attention Behavior Fine-Tuning (ABFT). Grounded in mechanistic interpretability, ABFT identifies an implicit “induction head preference” in ICL and reformulates it as a lightweight supervision objective—directly optimizing attention scores to enhance focus on correct labels. Unlike conventional fine-tuning, ABFT requires no architectural modifications; instead, it achieves behavior-controllable adaptation via module-level attention behavior regulation. Evaluated across nine LLMs and eight benchmark datasets, ABFT significantly improves ICL performance, robustness, and fairness, while consuming only 0.01% of the training data required by standard fine-tuning—substantially reducing computational overhead. The core contribution lies in the synergistic integration of mechanism-driven objective design and fine-grained attention intervention.

📝 Abstract
In-context Learning (ICL) uses structured demonstration-query inputs to induce few-shot learning in Language Models (LMs), which are not originally pre-trained on ICL-style data. To bridge the gap between ICL and pre-training, some approaches fine-tune LMs on large ICL-style datasets in an end-to-end paradigm, at massive computational cost. To reduce this cost, we propose Attention Behavior Fine-Tuning (ABFT), which draws on previous findings about the inner mechanism of ICL and builds training objectives on the attention scores instead of the final outputs: attention is pushed to focus on the correct label tokens presented in the context and away from the wrong label tokens. Our experiments on 9 modern LMs and 8 datasets empirically find that ABFT outperforms prior methods in performance, robustness, unbiasedness, and efficiency, with only around 0.01% of their data cost. Moreover, our subsequent analysis finds that the end-to-end training objective contains the ABFT objective, suggesting an implicit bias of ICL-style data toward the emergence of induction heads. Our work demonstrates the possibility of controlling specific module sequences within LMs to improve their behavior, opening up future applications of mechanistic interpretability.
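The attention-level objective described above can be sketched as follows. This is a minimal, illustrative reconstruction (not the paper's exact loss): given one head's attention scores over the context, it rewards attention mass on positions of correct label tokens and penalizes mass on positions of wrong label tokens. The function names and the exact form of the penalty are assumptions for illustration.

```python
import math

def softmax(scores):
    """Convert raw attention scores into an attention distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def abft_style_loss(attn_scores, correct_pos, wrong_pos):
    """Hypothetical ABFT-style objective (illustrative, not the paper's
    exact formulation): drive attention mass toward correct label token
    positions and away from wrong label token positions."""
    p = softmax(attn_scores)
    mass_correct = sum(p[i] for i in correct_pos)
    mass_wrong = sum(p[i] for i in wrong_pos)
    # -log term pushes mass_correct toward 1; the additive term
    # penalizes any mass left on wrong label tokens.
    return -math.log(mass_correct + 1e-9) + mass_wrong

# A head already attending to the correct label (position 0)
# incurs a lower loss than one attending uniformly.
focused = abft_style_loss([4.0, 0.0, 0.0], correct_pos=[0], wrong_pos=[1])
diffuse = abft_style_loss([0.0, 0.0, 0.0], correct_pos=[0], wrong_pos=[1])
```

In a real fine-tuning run, this scalar would be computed from attention matrices of selected heads (e.g. via a forward hook on the attention module) and minimized by gradient descent, leaving the output-level cross-entropy loss out of the objective entirely.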
Problem

Research questions and friction points this paper is trying to address.

Reducing computational costs in fine-tuning LMs for ICL
Improving attention scores to focus on correct label tokens
Enhancing LM performance, robustness, and efficiency with minimal data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes attention scores rather than final outputs
Reduces data cost to around 0.01% of end-to-end fine-tuning
Improves performance, robustness, and unbiasedness