🤖 AI Summary
This work addresses the lack of theoretical understanding regarding convergence rates and generalization guarantees in in-context learning (ICL). We conduct the first systematic analysis of approximation and convergence properties for single-layer Transformers in next-token prediction under both noiseless and noisy settings. Leveraging a finite-sample analysis framework based on gradient descent, combined with linear/ReLU attention modeling and distribution-aware learning theory, we rigorously establish that the expected loss converges to the Bayes risk at a linear rate with controllable generalization error, and further prove that the model achieves Bayes-optimality for this task. Empirical evaluations confirm that the model reproduces canonical ICL phenomena. Our work fills a critical theoretical gap in ICL by providing the first convergence-rate and generalization guarantees for Transformer-based ICL, offering novel insights into the implicit learning mechanisms of Transformers.
📄 Abstract
We study the approximation capabilities and convergence behaviors of one-layer transformers on noiseless and noisy in-context learning of next-token prediction. Existing theoretical results focus on understanding in-context learning behaviors either at the first gradient step or in the infinite-sample limit; moreover, no convergence rates or generalization guarantees were known. Our work addresses these gaps by showing that there exists a class of one-layer transformers, with both linear and ReLU attention, that is provably Bayes-optimal. When trained with gradient descent, we show via a finite-sample analysis that the expected loss of these transformers converges at a linear rate to the Bayes risk. Moreover, we prove that the trained models generalize to unseen samples and exhibit learning behaviors that were empirically observed in previous works. Our theoretical findings are further supported by extensive empirical validation.
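To make the setting concrete, here is a minimal numpy sketch of the kind of model the abstract describes: a one-layer linear-attention predictor of the form `pred = xq·W·(Xᵀy / n)` trained by gradient descent on synthetic noiseless linear next-token tasks. This is an illustrative toy (the task distribution, dimensions, and learning rate are our assumptions), not the paper's actual construction or proof object.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ctx = 5, 20  # assumed toy dimensions, not from the paper

def make_tasks(n_tasks):
    """Sample random linear tasks: context (X, y = X w) plus a query pair."""
    w_true = rng.normal(size=(n_tasks, d))            # hidden task vectors
    X = rng.normal(size=(n_tasks, n_ctx, d))          # context inputs
    y = np.einsum('tnd,td->tn', X, w_true)            # noiseless context targets
    xq = rng.normal(size=(n_tasks, d))                # query input
    target = np.einsum('td,td->t', xq, w_true)        # next-token target
    # Linear-attention summary of the context: v = X^T y / n_ctx (per task)
    v = np.einsum('tnd,tn->td', X, y) / n_ctx
    return xq, v, target

def predict(W, xq, v):
    """One-layer linear attention: scalar prediction xq^T W v per task."""
    return np.einsum('td,de,te->t', xq, W, v)

def train(steps=2000, lr=0.02, n_train=256):
    """Full-batch gradient descent on the squared next-token loss."""
    xq, v, target = make_tasks(n_train)
    W = np.zeros((d, d))
    for _ in range(steps):
        err = predict(W, xq, v) - target
        grad = np.einsum('t,td,te->de', err, xq, v) / n_train
        W -= lr * grad
    return W

W = train()
xq, v, target = make_tasks(256)  # fresh tasks: tests generalization
mse = np.mean((predict(W, xq, v) - target) ** 2)
baseline = np.mean(target ** 2)  # loss of the trivial zero predictor
```

On fresh tasks the trained model's squared error drops well below the trivial baseline, loosely mirroring the convergence-toward-Bayes-risk and generalization behavior the abstract claims for this model class (the residual error here reflects the finite context length `n_ctx`).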