One-Layer Transformers are Provably Optimal for In-context Reasoning and Distributional Association Learning in Next-Token Prediction Tasks

πŸ“… 2025-05-21
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the lack of theoretical understanding regarding convergence rates and generalization guarantees in in-context learning (ICL). We conduct the first systematic analysis of approximation and convergence properties for single-layer Transformers in next-token prediction under both noiseless and noisy settings. Leveraging a finite-sample analysis framework based on gradient descent, combined with linear/ReLU attention modeling and distribution-aware learning theory, we rigorously establish that the expected loss converges to the Bayes risk at a linear rate, with controllable generalization errorβ€”and further prove that the model achieves Bayes-optimality for this task. Empirical evaluations confirm that the model reproduces canonical ICL phenomena. Our work fills a critical theoretical gap in ICL by providing the first convergence-rate and generalization guarantees for Transformer-based ICL, offering novel insights into the implicit learning mechanisms of Transformers.

πŸ“ Abstract
We study the approximation capabilities and convergence behaviors of one-layer transformers on noiseless and noisy in-context reasoning for next-token prediction. Existing theoretical results focus on understanding in-context reasoning behaviors either at the first gradient step or in the infinite-sample limit; furthermore, no convergence rates or generalization guarantees were previously known. Our work addresses these gaps by showing that there exists a class of one-layer transformers that are provably Bayes-optimal with both linear and ReLU attention. When trained with gradient descent, we show via a finite-sample analysis that the expected loss of these transformers converges at a linear rate to the Bayes risk. Moreover, we prove that the trained models generalize to unseen samples and exhibit the learning behaviors empirically observed in previous works. Our theoretical findings are further supported by extensive empirical validation.
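To make the architectures in the abstract concrete, the following is a minimal sketch of a one-layer attention forward pass for next-token prediction. This is an illustrative assumption, not the paper's exact construction: the function name, weight shapes, and scaling are hypothetical, but it shows the single point where the two attention variants studied in the paper differ, namely whether the raw (linear) attention scores are passed through a ReLU.

```python
import numpy as np

def one_layer_attention(X, W_q, W_k, W_v, activation="linear"):
    """Predict the next token's representation from a context X of shape (T, d).

    activation="linear" uses the raw (unnormalized) attention scores;
    activation="relu" applies a ReLU to the scores before aggregation.
    """
    Q = X @ W_q                              # queries (T, d)
    K = X @ W_k                              # keys    (T, d)
    V = X @ W_v                              # values  (T, d)
    scores = Q @ K.T / np.sqrt(X.shape[1])   # attention scores (T, T)
    if activation == "relu":
        scores = np.maximum(scores, 0.0)     # ReLU attention
    out = scores @ V                         # aggregate values
    return out[-1]                           # prediction read off the last position

# Hypothetical usage: random context of T=8 tokens in d=4 dimensions.
rng = np.random.default_rng(0)
T, d = 8, 4
X = rng.standard_normal((T, d))
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
y_linear = one_layer_attention(X, W_q, W_k, W_v, "linear")
y_relu = one_layer_attention(X, W_q, W_k, W_v, "relu")
```

In the paper's setting, the weights would be trained with gradient descent on the next-token prediction loss; the claim is that within this model class the expected loss converges at a linear rate to the Bayes risk.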
Problem

Research questions and friction points this paper is trying to address.

Analyzing approximation capabilities of one-layer transformers
Proving Bayes-optimality for in-context reasoning tasks
Establishing convergence rates and generalization abilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-layer transformers achieve Bayes-optimal performance
Linear and ReLU attention enable optimal convergence
Finite-sample analysis shows linear convergence rate
Quan Nguyen
Department of Computer Science, University of Victoria, Canada
Thanh Nguyen-Tang
Johns Hopkins University
Machine Learning