🤖 AI Summary
Pretrained Transformers exhibit substantially degraded in-context learning (ICL) performance under distributional shift, hindering real-world deployment. This work provides the first theoretical analysis of how attention temperature influences ICL generalization error under distribution shift, deriving a closed-form expression for that error within a linearized softmax framework and proving the existence of an optimal temperature that minimizes it. Empirical validation, including linear regression simulations and cross-distribution question-answering experiments on GPT-2 and LLaMA2-7B, demonstrates that tuning attention temperature significantly improves ICL robustness. The study thus delivers both theoretical grounding and a practical, parameter-efficient mechanism for attention calibration, offering a principled pathway to stronger few-shot adaptation of large language models in out-of-distribution scenarios.
📝 Abstract
Pretrained Transformers excel at in-context learning (ICL), inferring new tasks from only a handful of examples. Yet, their ICL performance can degrade sharply under distribution shift between pretraining and test data, a regime increasingly common in real-world deployments. While recent empirical work hints that adjusting the attention temperature in the softmax can enhance Transformer performance, the attention temperature's role in ICL under distribution shift remains unexplored. This paper provides the first theoretical and empirical study of attention temperature for ICL under distribution shift. Using a simplified but expressive "linearized softmax" framework, we derive closed-form generalization error expressions and prove that shifts in input covariance or label noise substantially impair ICL, but that an optimal attention temperature exists which minimizes this error. We then validate our predictions through extensive simulations on linear regression tasks and large-scale experiments with GPT-2 and LLaMA2-7B on question-answering benchmarks. Our results establish attention temperature as a principled and powerful mechanism for improving the robustness of ICL in pretrained Transformers, advancing theoretical understanding and providing actionable guidance for selecting attention temperature in practice.
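To make the mechanism concrete, here is a minimal sketch of scaled dot-product attention with an explicit temperature knob. This is an illustration of the general technique, not the paper's implementation: the function names, shapes, and the post-hoc temperature sweep are assumptions for the example. Raising the temperature flattens the softmax weights over in-context examples; lowering it sharpens them.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, temperature=1.0):
    """Scaled dot-product attention with a temperature parameter.

    temperature > 1 flattens the attention distribution (more uniform
    mixing over context tokens); temperature < 1 sharpens it.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / (np.sqrt(d) * temperature)   # divide logits by T
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ V, weights
```

Under the paper's claim that an error-minimizing temperature exists, a practitioner could sweep `temperature` at inference time and select the value that minimizes held-out error on shifted data, without retraining any model weights.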