Optimal Attention Temperature Enhances In-Context Learning under Distribution Shift

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pretrained Transformers exhibit substantially degraded in-context learning (ICL) performance under distributional shift, hindering real-world deployment. This work provides the first theoretical analysis of how attention temperature influences ICL generalization error under distribution shift, proving the existence of an optimal temperature that minimizes this error and deriving a closed-form expression for the generalization error within a linearized softmax framework. Empirical validation—including linear regression simulations and cross-distribution question-answering experiments on GPT-2 and LLaMA2-7B—demonstrates that tuning attention temperature significantly improves ICL robustness. The study thus delivers both theoretical grounding and a practical, parameter-efficient mechanism for attention calibration. By bridging theory and practice, it offers a novel pathway to enhance few-shot adaptation capabilities of large language models in out-of-distribution scenarios.

📝 Abstract
Pretrained Transformers excel at in-context learning (ICL), inferring new tasks from only a handful of examples. Yet, their ICL performance can degrade sharply under distribution shift between pretraining and test data, a regime increasingly common in real-world deployments. While recent empirical work hints that adjusting the attention temperature in the softmax can enhance Transformer performance, the attention temperature's role in ICL under distribution shift remains unexplored. This paper provides the first theoretical and empirical study of attention temperature for ICL under distribution shift. Using a simplified but expressive "linearized softmax" framework, we derive closed-form generalization error expressions and prove that shifts in input covariance or label noise substantially impair ICL, but that an optimal attention temperature exists which minimizes this error. We then validate our predictions through extensive simulations on linear regression tasks and large-scale experiments with GPT-2 and LLaMA2-7B on question-answering benchmarks. Our results establish attention temperature as a principled and powerful mechanism for improving the robustness of ICL in pretrained Transformers, advancing theoretical understanding and providing actionable guidance for selecting attention temperature in practice.
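The attention temperature studied here scales the logits inside the softmax of scaled dot-product attention. A minimal NumPy sketch of this mechanism (an illustration, not the paper's linearized-softmax framework; the function name and signature are ours): higher temperature flattens the attention weights toward uniform, lower temperature sharpens them toward the nearest key.

```python
import numpy as np

def attention(Q, K, V, temperature=1.0):
    """Scaled dot-product attention with an explicit temperature.

    temperature > 1 flattens the attention distribution;
    temperature < 1 sharpens it. temperature = 1 recovers the
    standard softmax(QK^T / sqrt(d_k)) V formulation.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / (np.sqrt(d_k) * temperature)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

At the limit of very large temperature, every query attends uniformly and the output collapses to the mean of the value vectors, which is the flattening effect the paper exploits when calibrating attention under shift.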
Problem

Research questions and friction points this paper is trying to address.

Optimizing attention temperature to improve in-context learning robustness
Addressing performance degradation under pretraining-test distribution shifts
Establishing theoretical framework for attention temperature in distribution shifts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal attention temperature minimizes ICL generalization error
Linearized softmax framework enables closed-form error analysis
Attention temperature enhances Transformer robustness under distribution shift
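The claim that an optimal temperature minimizes ICL generalization error can be probed with a toy experiment in the spirit of the paper's linear-regression simulations (this is our own simplified sketch, not the authors' setup: a single softmax-attention layer predicts a query label as an attention-weighted average of context labels, with test queries drawn from a scaled covariance to mimic distribution shift).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_attention_predict(x_ctx, y_ctx, x_query, temperature):
    # One-layer softmax-attention estimator for in-context regression:
    # the query label is an attention-weighted average of context labels.
    scores = x_ctx @ x_query / temperature
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ y_ctx

def icl_error(temperature, n_tasks=200, n_ctx=20, d=5, shift=2.0):
    # Context inputs use identity covariance; the query input is drawn
    # with a scaled covariance to mimic pretraining-test distribution shift.
    errs = []
    for _ in range(n_tasks):
        w_star = rng.standard_normal(d)
        x_ctx = rng.standard_normal((n_ctx, d))
        y_ctx = x_ctx @ w_star + 0.1 * rng.standard_normal(n_ctx)
        x_q = shift * rng.standard_normal(d)
        y_q = x_q @ w_star
        pred = softmax_attention_predict(x_ctx, y_ctx, x_q, temperature)
        errs.append((pred - y_q) ** 2)
    return float(np.mean(errs))

# Sweep a grid of temperatures and pick the error-minimizing one.
temps = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]
errors = [icl_error(t) for t in temps]
best = temps[int(np.argmin(errors))]
```

Sweeping the grid and inspecting `errors` against `temps` is the empirical analogue of the paper's closed-form result: under shift, the error curve over temperature need not be minimized at the default value of 1.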
👥 Authors
Samet Demir
Koç University
machine learning · optimization · statistics
Zafer Doğan
MLIP Research Group, KUIS AI Center, Koc University; Department of EEE, Koc University