Pretrain-Test Task Alignment Governs Generalization in In-Context Learning

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how the structural design of pretraining tasks affects in-context learning (ICL) generalization, particularly addressing performance degradation under structural mismatch between pretraining and downstream tasks. Method: We introduce *task alignment* as the core mechanism, formalizing a quantifiable and interpretable metric for task alignment. Leveraging a linear-regression-solvable model with linear attention, we theoretically derive a generalization error bound under high-dimensional covariance mismatch, then extend the analysis to nonlinear Transformers. Contribution/Results: Our alignment metric accurately predicts ICL performance across diverse settings. Experiments reveal a fundamental trade-off between task diversity during pretraining and ICL generalization. This is the first work to unify ICL generalization phenomena through a structural alignment lens, providing both theoretical foundations and quantitative guidance for designing pretraining objectives in large language models.

Technology Category

Application Category

📝 Abstract
In-context learning (ICL) is a central capability of Transformer models, but the structures in data that enable its emergence and govern its robustness remain poorly understood. In this work, we study how the structure of pretraining tasks governs generalization in ICL. Using a solvable model for ICL of linear regression by linear attention, we derive an exact expression for ICL generalization error in high dimensions under arbitrary pretraining-testing task covariance mismatch. This leads to a new alignment measure that quantifies how much information about the pretraining task distribution is useful for inference at test time. We show that this measure directly predicts ICL performance not only in the solvable model but also in nonlinear Transformers. Our analysis further reveals a tradeoff between specialization and generalization in ICL: depending on task distribution alignment, increasing pretraining task diversity can either improve or harm test performance. Together, these results identify train-test task alignment as a key determinant of generalization in ICL.
Problem

Research questions and friction points this paper is trying to address.

Analyzing how pretraining task structure affects in-context learning generalization
Developing alignment measures to quantify pretraining-test task usefulness
Identifying task alignment as key determinant of ICL generalization performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Solvable linear model for ICL generalization analysis
Novel alignment measure quantifying pretraining-test task usefulness
Revealing specialization-generalization tradeoff in task diversity
🔎 Similar Papers
No similar papers found.