Transformers through the lens of support-preserving maps between measures

📅 2025-09-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the representational capacity of Transformers in the space of probability measures, aiming to characterize their intrinsic nature as "in-context maps." Method: leveraging measure-theoretic modeling, Wasserstein geometry, mean-field limits, and nonlocal transport equations, the authors rigorously analyze Transformer-induced mappings between probability measures. Contribution/Results: the paper gives a complete characterization; any such mapping must preserve the cardinality of the support, and the regular part of its Fréchet derivative must be uniformly continuous. Moreover, Transformers are shown to be universal approximators of arbitrary continuous in-context maps. Crucially, their infinite-depth, mean-field limit can be identified with the dynamical flow governed by the Vlasov equation. This establishes a fundamental connection between the expressive power of large language models and physically motivated dynamical systems, offering a novel mathematical framework for understanding the structural properties of self-attention mechanisms.
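As a rough illustration of the objects involved (a minimal sketch; the matrices Q, K, V and the exact normalization are generic choices and not necessarily the paper's notation), measure-theoretic self-attention can be written as an in-context map acting on each point, with the Transformer layer given by the push forward of the context measure along that map:

\[
  \Gamma_\mu(x) \;=\; x + \frac{\int e^{\langle Q x,\, K y\rangle}\, V y \,\mathrm{d}\mu(y)}{\int e^{\langle Q x,\, K y\rangle}\,\mathrm{d}\mu(y)},
  \qquad
  T(\mu) \;=\; (\Gamma_\mu)_{\#}\,\mu .
\]

Because T(mu) is a push forward of mu along Gamma_mu, it cannot create new atoms: a discrete context of n tokens is mapped to a measure supported on at most n points, which is the support-cardinality property referred to above.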

📝 Abstract
Transformers are deep architectures that define "in-context maps," which enable predicting new tokens based on a given set of tokens (such as a prompt in NLP applications or a set of patches for a vision transformer). In previous work, we studied the ability of these architectures to handle an arbitrarily large number of context tokens. To analyze their expressivity mathematically and uniformly, we considered the case in which the mappings are conditioned on a context represented by a probability distribution, which becomes discrete for a finite number of tokens. Modeling neural networks as maps on probability measures has multiple applications, such as studying Wasserstein regularity, proving generalization bounds, and performing a mean-field limit analysis of the dynamics of interacting particles as they pass through the network. In this work, we study the question of what kind of maps between measures transformers are. We fully characterize the properties of maps between measures that enable them to be represented in terms of in-context maps via a push forward. On the one hand, these include transformers; on the other hand, transformers universally approximate representations with any continuous in-context map. These properties are the preservation of the cardinality of the support and the uniform continuity of the regular part of the Fréchet derivative. Moreover, we show that, on the one hand, the solution map of the Cauchy problem for the Vlasov equation, which is of nonlocal transport type, for interacting particle systems in the mean-field regime satisfies these conditions and can hence be approximated by a transformer; on the other hand, we prove that measure-theoretic self-attention has the properties ensuring that the infinite-depth, mean-field, measure-theoretic transformer can be identified with a Vlasov flow.
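For orientation, a schematic form of the nonlocal transport (Vlasov-type) dynamics referred to in the abstract, written with a generic attention-induced velocity field v[mu] (illustrative notation, not taken from the paper):

\[
  \partial_t \mu_t + \nabla\!\cdot\!\big(\mu_t\, v[\mu_t]\big) = 0,
  \qquad
  v[\mu_t](x) = \frac{\int e^{\langle Q x,\, K y\rangle}\, V y \,\mathrm{d}\mu_t(y)}{\int e^{\langle Q x,\, K y\rangle}\,\mathrm{d}\mu_t(y)} .
\]

Under this reading, the solution map mu_0 -> mu_t of the Cauchy problem plays the role of the infinite-depth, mean-field transformer, and, informally, each residual attention layer can be viewed as one explicit time step of the flow.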
Problem

Research questions and friction points this paper is trying to address.

Characterizing transformer architectures as support-preserving maps between probability measures
Establishing conditions for universal approximation of continuous in-context maps
Connecting measure-theoretic transformers with Vlasov equation solutions in mean-field regimes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modeling transformers as maps between probability measures
Characterizing support-preserving maps via push forward (see the numerical sketch after this list)
Approximating Vlasov equation solutions with transformers
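A minimal numerical sketch of the push-forward viewpoint (assuming plain residual softmax self-attention with generic Gaussian parameters Q, K, V; this is illustrative, not the paper's construction): a finite set of tokens is the support of an empirical measure, a self-attention layer moves each support point via the in-context map, and the output is again an empirical measure with no new atoms.

import numpy as np

# Minimal sketch (not the paper's construction): softmax self-attention viewed
# as a push forward of the empirical measure (1/n) * sum_i delta_{x_i}.
# Q, K, V are generic illustrative parameters.

def attention_pushforward(X, Q, K, V):
    """Move each support point x_i to Gamma_mu(x_i), where Gamma_mu is the
    in-context map induced by residual softmax self-attention and mu is the
    empirical measure of the rows of X."""
    scores = (X @ Q) @ (X @ K).T                       # pairwise attention logits
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
    return X + weights @ (X @ V)                       # residual + attended values

rng = np.random.default_rng(0)
n, d = 5, 3                                            # five tokens in R^3
X = rng.normal(size=(n, d))                            # support points of the empirical measure
Q, K, V = (rng.normal(size=(d, d)) for _ in range(3))

Y = attention_pushforward(X, Q, K, V)
# The output is again an empirical measure on (at most) n points: a push
# forward cannot create new atoms, it can only move or merge existing ones.
print(Y.shape)  # (5, 3)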