Softmax as Linear Attention in the Large-Prompt Regime: a Measure-based Perspective

📅 2025-12-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the theoretical analysis of softmax attention in both finite- and infinite-context regimes. We establish, for the first time from a measure-theoretic perspective, that as the context length tends to infinity, single-layer softmax attention converges almost surely to a linear operator determined by the empirical measure of the input tokens. Our unified non-asymptotic framework yields explicit convergence-rate bounds for both outputs and gradients, characterizes the stability of this concentration throughout training, and rigorously justifies linearized dynamics. Crucially, we bridge linear and softmax attention theories: beyond revealing the intrinsic linearity of softmax attention in the long-context limit, our analysis enables direct transfer of linear-attention optimization theory to large-context settings. This provides a rigorous foundation for modeling and analyzing in-context learning and training dynamics in transformer-based architectures.
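The claimed linearity can be made concrete with a short calculation (our own sketch, not reproduced from the paper): for i.i.d. Gaussian tokens, the softmax-weighted average of values reduces, in the infinite-context limit, to a fixed linear map of the query. Here $a = W_K^\top q$ collects the score direction, and the last step uses the Gaussian moment identity $\mathbb{E}[x\, e^{a^\top x}] = a\, e^{\|a\|^2/2}$ for $x \sim \mathcal{N}(0, I_d)$:

```latex
% i.i.d. tokens x_i ~ N(0, I_d), scores s_i = q^T W_K x_i = a^T x_i with a = W_K^T q.
% By the strong law of large numbers,
\sum_{i=1}^{n} \mathrm{softmax}(s)_i \, W_V x_i
  = \frac{\tfrac{1}{n}\sum_{i} e^{a^\top x_i} \, W_V x_i}{\tfrac{1}{n}\sum_{j} e^{a^\top x_j}}
  \;\xrightarrow[n\to\infty]{\text{a.s.}}\;
  \frac{\mathbb{E}\!\left[e^{a^\top x} \, W_V x\right]}{\mathbb{E}\!\left[e^{a^\top x}\right]}
  = W_V \, a = W_V W_K^\top q .
```

So in this idealized setting the limiting operator is simply $q \mapsto W_V W_K^\top q$, which is the sense in which large-prompt softmax attention behaves like linear attention.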

📝 Abstract
Softmax attention is a central component of transformer architectures, yet its nonlinear structure poses significant challenges for theoretical analysis. We develop a unified, measure-based framework for studying single-layer softmax attention under both finite and infinite prompts. For i.i.d. Gaussian inputs, we show that the softmax operator converges in the infinite-prompt limit to a linear operator acting on the underlying input-token measure. Building on this insight, we establish non-asymptotic concentration bounds for the output and gradient of softmax attention, quantifying how rapidly the finite-prompt model approaches its infinite-prompt counterpart, and prove that this concentration remains stable along the entire training trajectory in general in-context learning settings with sub-Gaussian tokens. In the case of in-context linear regression, we use the tractable infinite-prompt dynamics to analyze training at finite prompt length. Our results allow optimization analyses developed for linear attention to transfer directly to softmax attention when prompts are sufficiently long, showing that large-prompt softmax attention inherits the analytical structure of its linear counterpart. This, in turn, provides a principled and broadly applicable toolkit for studying the training dynamics and statistical behavior of softmax attention layers in large-prompt regimes.
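The finite-to-infinite prompt convergence described above can be checked numerically. The following is a minimal sketch of our own (not the paper's code or exact setting), assuming i.i.d. standard Gaussian tokens and the usual 1/√d score scaling; in that case the infinite-prompt limit of single-layer softmax attention on a query q is the linear map W_V W_K^T q / √d, and the error should shrink as the prompt grows:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
W_K = rng.normal(size=(d, d)) / np.sqrt(d)  # key projection (illustrative)
W_V = rng.normal(size=(d, d)) / np.sqrt(d)  # value projection (illustrative)
q = rng.normal(size=d)                      # a fixed query

def softmax_attention(q, X, W_K, W_V):
    """Single-layer softmax attention over prompt tokens X (shape (n, d))."""
    scores = X @ W_K.T @ q / np.sqrt(d)     # s_i = q . (W_K x_i) / sqrt(d)
    w = np.exp(scores - scores.max())       # stable softmax weights
    w /= w.sum()
    return (w @ X) @ W_V.T                  # sum_i w_i * W_V x_i

# Infinite-prompt linear limit for x ~ N(0, I_d): W_V W_K^T q / sqrt(d)
linear_limit = W_V @ (W_K.T @ q) / np.sqrt(d)

errs = []
for n in [100, 10_000, 1_000_000]:
    X = rng.normal(size=(n, d))             # i.i.d. Gaussian prompt tokens
    err = np.linalg.norm(softmax_attention(q, X, W_K, W_V) - linear_limit)
    errs.append(err)
    print(f"n = {n:>9}: ||softmax attn - linear limit|| = {err:.4f}")
```

The observed error decays roughly at the 1/√n rate one would expect from a law-of-large-numbers argument, consistent with the non-asymptotic concentration bounds the abstract describes.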
Problem

Research questions and friction points this paper is trying to address.

Analyzes softmax attention convergence to linear operator
Establishes concentration bounds for output and gradient
Transfers linear attention optimization analyses to softmax attention
Innovation

Methods, ideas, or system contributions that make the work stand out.

Softmax converges to linear operator for infinite prompts
Non-asymptotic bounds quantify finite-to-infinite prompt convergence
Large-prompt softmax inherits linear attention's analytical structure