Provable test-time adaptivity and distributional robustness of in-context learning

📅 2025-10-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the nonparametric generalization of pretrained Transformers performing in-context learning under distributional shift and varying task difficulty. It considers test distributions that deviate from the pretraining prior, assuming tasks with stochastic smoothness and multi-index structure. The analysis is carried out in a distributionally robust framework constrained by χ²-divergence, jointly modeling stochastic effective dimension and smoothness. Theoretically, the paper proves that pretrained Transformers automatically adapt to task difficulty at test time and achieve the optimal nonparametric convergence rate, matching the intrinsic difficulty of the target task, even under distributional shift. Crucially, their performance is provably no worse than that of the best estimator tailored to any admissible test distribution, thereby going beyond classical minimax lower bounds. This constitutes the first rigorous nonparametric convergence guarantee for adaptive and robust generalization in large-model in-context learning.
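As a reading aid (this is the standard definition of the χ²-divergence constraint, not text quoted from the paper; the symbols μ, π_β and κ follow the abstract's notation):

```latex
% Admissible test distributions lie in a chi-squared ball of radius kappa
% around the pretraining component pi_beta for difficulty level beta:
\chi^2(\mu, \pi_\beta) \;=\; \int \left( \frac{\mathrm{d}\mu}{\mathrm{d}\pi_\beta} - 1 \right)^2 \mathrm{d}\pi_\beta \;\le\; \kappa .
```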

📝 Abstract
We study in-context learning problems where a Transformer is pretrained on tasks drawn from a mixture distribution $\pi = \sum_{\alpha \in \mathcal{A}} \lambda_\alpha \pi_\alpha$, called the pretraining prior, in which each mixture component $\pi_\alpha$ is a distribution on tasks of a specific difficulty level indexed by $\alpha$. Our goal is to understand the performance of the pretrained Transformer when evaluated on a different test distribution $\mu$, consisting of tasks of fixed difficulty $\beta \in \mathcal{A}$, and with potential distribution shift relative to $\pi_\beta$, subject to the chi-squared divergence $\chi^2(\mu, \pi_\beta)$ being at most $\kappa$. In particular, we consider nonparametric regression problems with random smoothness, and multi-index models with random smoothness as well as random effective dimension. We prove that a large Transformer pretrained on sufficient data achieves the optimal rate of convergence corresponding to the difficulty level $\beta$, uniformly over test distributions $\mu$ in the chi-squared divergence ball. Thus, the pretrained Transformer is able to achieve faster rates of convergence on easier tasks and is robust to distribution shift at test time. Finally, we prove that even if an estimator had access to the test distribution $\mu$, the convergence rate of its expected risk over $\mu$ could not be faster than that of our pretrained Transformers, thereby providing a more appropriate optimality guarantee than minimax lower bounds.
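To make the robustness constraint concrete, here is a minimal discrete-distribution sketch (an illustration only; the paper works with general task distributions, and the names `chi_squared_divergence`, `pi_beta`, `mu` and `kappa` are ours). It computes $\chi^2(\mu, \pi_\beta) = \sum_x (\mu(x) - \pi_\beta(x))^2 / \pi_\beta(x)$ and checks membership in the divergence ball of radius $\kappa$:

```python
import numpy as np

def chi_squared_divergence(mu, pi):
    """Chi-squared divergence between two discrete distributions on the same support."""
    mu, pi = np.asarray(mu, dtype=float), np.asarray(pi, dtype=float)
    return float(np.sum((mu - pi) ** 2 / pi))

def in_divergence_ball(mu, pi, kappa):
    """Is mu an admissible test distribution, i.e. within chi^2 radius kappa of pi?"""
    return chi_squared_divergence(mu, pi) <= kappa

# Hypothetical pretraining component for one difficulty level, and a shifted test law.
pi_beta = np.array([0.25, 0.25, 0.25, 0.25])
mu = np.array([0.4, 0.3, 0.2, 0.1])

print(round(chi_squared_divergence(mu, pi_beta), 3))   # -> 0.2
print(in_divergence_ball(mu, pi_beta, kappa=0.5))      # -> True
```

The paper's guarantee is uniform over every `mu` passing this check for a given `kappa`, not just the particular shift shown here.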
Problem

Research questions and friction points this paper is trying to address.

Analyzing Transformers' test-time adaptivity under distribution shift
Proving optimal convergence rates for nonparametric and multi-index regression tasks
Establishing robustness guarantees for in-context learning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer pretrained on a mixture of task distributions
Achieves the optimal convergence rate for the test-task difficulty
Robust to distribution shift within a chi-squared divergence ball
Tianyi Ma
Statistical Laboratory, University of Cambridge
Tengyao Wang
Professor in Statistics at London School of Economics
statistical theory and methodology
Richard J. Samworth
Statistical Laboratory, University of Cambridge