🤖 AI Summary
This work develops a theoretical analysis of softmax attention in both finite- and infinite-context regimes. We establish, for the first time from a measure-theoretic perspective, that as the context length tends to infinity, single-layer softmax attention converges almost surely to a linear operator determined by the empirical measure of the input tokens. Our unified non-asymptotic framework yields explicit convergence-rate bounds for both outputs and gradients, shows that this concentration remains stable throughout training, and rigorously justifies linearized dynamics. Crucially, we bridge the theories of linear and softmax attention: beyond revealing the intrinsic linearity of softmax attention in the long-context limit, our analysis allows optimization theory developed for linear attention to transfer directly to long-context settings. This provides a rigorous foundation for modeling and analyzing in-context learning and training dynamics in transformer-based architectures.
📝 Abstract
Softmax attention is a central component of transformer architectures, yet its nonlinear structure poses significant challenges for theoretical analysis. We develop a unified, measure-based framework for studying single-layer softmax attention under both finite and infinite prompts. For i.i.d. Gaussian inputs, we show that the softmax operator converges in the infinite-prompt limit to a linear operator acting on the underlying input-token measure. Building on this insight, we establish non-asymptotic concentration bounds for the output and gradient of softmax attention, quantifying how rapidly the finite-prompt model approaches its infinite-prompt counterpart, and we prove that this concentration remains stable along the entire training trajectory in general in-context learning settings with sub-Gaussian tokens. For in-context linear regression, we use the tractable infinite-prompt dynamics to analyze training at finite prompt length. Our results allow optimization analyses developed for linear attention to transfer directly to softmax attention when prompts are sufficiently long, showing that large-prompt softmax attention inherits the analytical structure of its linear counterpart. This, in turn, provides a principled and broadly applicable toolkit for studying the training dynamics and statistical behavior of softmax attention layers in the large-prompt regime.
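The concentration phenomenon described above can be checked numerically in a toy setting. The sketch below is a simplified illustration, not the paper's construction: it assumes values equal the tokens themselves (v_i = x_i) and a single score matrix A (so the score of token x_i against query q is q^T A x_i). For tokens X ~ N(0, I_d), the self-normalized softmax average E[e^{s·X} X] / E[e^{s·X}] with s = A^T q evaluates to s itself, so the infinite-prompt output is the linear map q ↦ A^T q; the error of the finite-prompt output should shrink as the prompt length n grows.

```python
import numpy as np

def softmax_attention(q, X, A):
    """Finite-prompt softmax attention with values = tokens (a simplification):
    returns the softmax(q^T A x_i)-weighted average of the rows of X."""
    scores = X @ (A.T @ q)      # score q^T A x_i for each token x_i
    scores -= scores.max()      # shift for numerical stability
    w = np.exp(scores)
    w /= w.sum()                # softmax weights over the n tokens
    return w @ X                # weighted average of tokens

rng = np.random.default_rng(0)
d = 4
A = 0.3 * rng.standard_normal((d, d))   # hypothetical score matrix
q = rng.standard_normal(d)              # fixed query
limit = A.T @ q   # infinite-prompt limit for X ~ N(0, I_d): linear in q

errs = []
for n in [100, 10_000, 1_000_000]:
    X = rng.standard_normal((n, d))     # n i.i.d. Gaussian tokens
    err = np.linalg.norm(softmax_attention(q, X, A) - limit)
    errs.append(err)
    print(f"n={n:>9}  ||Attn_n(q) - A^T q|| = {err:.4f}")
```

The printed error should decay roughly like n^{-1/2}, consistent with the non-asymptotic concentration rates the abstract describes.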