Out-of-Distribution Generalization of In-Context Learning: A Low-Dimensional Subspace Perspective

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the out-of-distribution (OOD) generalization of in-context learning (ICL) in linear regression settings where the task covariance matrix has low-rank structure, modeling distribution shift as the angle between the training and test task subspaces. From this subspace-geometric perspective, the authors prove that a single-layer linear attention model incurs a test risk with a non-negligible dependence on the angle, so ICL is not fully robust to such shifts; nevertheless, given sufficiently long prompts, ICL trained on task vectors drawn from a union of low-dimensional subspaces generalizes to any subspace within their span. This suggests that OOD generalization hinges on whether new task vectors lie in the span of the training subspaces, a finding validated empirically on GPT-2. The authors further show that LoRA effectively captures distribution shifts, substantially improving ICL's OOD robustness.
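The geometric setup in the summary can be made concrete with a small sketch. The code below (variable names, dimensions, and the angle value are illustrative assumptions, not the authors' code) draws a rank-k training subspace, rotates one basis direction by an angle θ to form the test subspace, and samples a linear-regression prompt whose task vector lies in the rotated subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, theta = 8, 2, np.pi / 6  # ambient dim, subspace rank, shift angle (assumed values)

# Orthonormal basis U_train for a k-dimensional training subspace of R^d.
U_train, _ = np.linalg.qr(rng.standard_normal((d, k)))

# Rotate the first basis direction by theta toward a direction orthogonal
# to the training subspace, modeling the angular distribution shift.
v = rng.standard_normal(d)
v -= U_train @ (U_train.T @ v)          # component orthogonal to span(U_train)
v /= np.linalg.norm(v)
U_test = U_train.copy()
U_test[:, 0] = np.cos(theta) * U_train[:, 0] + np.sin(theta) * v

def sample_prompt(U, n):
    """Sample an ICL prompt (x_i, y_i) for a task vector w drawn from span(U)."""
    w = U @ rng.standard_normal(U.shape[1])  # low-rank task vector
    X = rng.standard_normal((n, U.shape[0]))
    y = X @ w
    return X, y, w

X, y, w = sample_prompt(U_test, n=16)
```

By construction the smallest principal angle between `U_train` and `U_test` equals θ, which is the quantity the test-risk bounds depend on.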

📝 Abstract
This work aims to demystify the out-of-distribution (OOD) capabilities of in-context learning (ICL) by studying linear regression tasks parameterized with low-rank covariance matrices. With such a parameterization, we can model distribution shifts as a varying angle between the subspace of the training and testing covariance matrices. We prove that a single-layer linear attention model incurs a test risk with a non-negligible dependence on the angle, illustrating that ICL is not robust to such distribution shifts. However, using this framework, we also prove an interesting property of ICL: when trained on task vectors drawn from a union of low-dimensional subspaces, ICL can generalize to any subspace within their span, given sufficiently long prompt lengths. This suggests that the OOD generalization ability of Transformers may actually stem from the new task lying within the span of those encountered during training. We empirically show that our results also hold for models such as GPT-2, and conclude with (i) experiments on how our observations extend to nonlinear function classes and (ii) results on how LoRA has the ability to capture distribution shifts.
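In this line of work, the trained single-layer linear attention model is commonly analyzed as a predictor of the form ŷ = x_qᵀ W (n⁻¹ Σᵢ yᵢ xᵢ), where W is the learned attention weight. The sketch below (the choice W = I and all names are illustrative assumptions, not the paper's trained weights) shows why long prompts recover the underlying task vector when inputs are isotropic:

```python
import numpy as np

def linear_attention_predict(X, y, x_query, W):
    """One-layer linear attention ICL estimate: y_hat = x_q^T W (X^T y / n).

    X: (n, d) prompt inputs, y: (n,) prompt labels, W: (d, d) weight matrix.
    """
    n = X.shape[0]
    return x_query @ W @ (X.T @ y) / n

rng = np.random.default_rng(1)
d, n = 4, 100_000
w_star = rng.standard_normal(d)           # ground-truth task vector
X = rng.standard_normal((n, d))           # isotropic prompt inputs
y = X @ w_star
x_q = rng.standard_normal(d)

# With isotropic inputs and W = I, (X^T y)/n = (X^T X / n) w* -> w*,
# so the prediction approaches x_q^T w* as the prompt grows.
y_hat = linear_attention_predict(X, y, x_q, np.eye(d))
```

When W is instead fit to a low-rank training covariance, the same estimate picks up a dependence on the angle between training and test subspaces, which is the sensitivity the paper quantifies.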
Problem

Research questions and friction points this paper is trying to address.

Study OOD generalization in ICL via low-rank covariance matrices
Analyze ICL robustness to distribution shifts using subspace angles
Explore ICL generalization within training subspace spans
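The span condition in the last point can be checked numerically: a task vector transfers exactly when its residual after projecting onto the span of the union of training subspaces vanishes. A minimal sketch, assuming orthonormal bases whose stacked columns are linearly independent (names and dimensions are illustrative):

```python
import numpy as np

def in_span(w, bases, tol=1e-8):
    """Check whether w lies in span(U_1, ..., U_m) for orthonormal bases U_i."""
    B = np.hstack(bases)                   # stack all training subspace bases
    Q, _ = np.linalg.qr(B)                 # orthonormal basis for the union's span
    residual = w - Q @ (Q.T @ w)           # component of w outside the span
    return np.linalg.norm(residual) <= tol * max(np.linalg.norm(w), 1.0)

rng = np.random.default_rng(2)
d = 6
U1, _ = np.linalg.qr(rng.standard_normal((d, 2)))
U2, _ = np.linalg.qr(rng.standard_normal((d, 2)))

w_in = U1 @ [1.0, -2.0] + U2 @ [0.5, 0.0]  # lies in the span of the union
w_out = rng.standard_normal(d)             # generic vector, almost surely outside
```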
Innovation

Methods, ideas, or system contributions that make the work stand out.

Studies ICL OOD via low-rank covariance matrices
Proves ICL generalizes within subspace span
Validates findings with GPT-2 and LoRA
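The LoRA result above is consistent with the standard LoRA parameterization W₀ + BA: when the distribution shift corresponds to a low-rank change of the weights, a rank-r adapter can absorb it while W₀ stays frozen. The sketch below is a generic illustration of that parameterization (not the authors' training code; the closed-form SVD "fit" stands in for gradient-based adapter training):

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 8, 2

# Frozen "pretrained" weight, and a target weight that differs by a
# rank-r perturbation -- analogous to a low-rank distribution shift.
W0 = rng.standard_normal((d, d))
delta = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
W_target = W0 + delta

# LoRA: keep W0 frozen and learn a rank-r factorization B @ A of the shift.
# Here the factors come from a truncated SVD of the residual.
U, s, Vt = np.linalg.svd(W_target - W0)
B = U[:, :r] * s[:r]          # (d, r) down-projected factor, scaled
A = Vt[:r, :]                 # (r, d) up-projected factor

W_adapted = W0 + B @ A        # rank-r update recovers the rank-r shift exactly
```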