Transformers Can Overcome the Curse of Dimensionality: A Theoretical Study from an Approximation Perspective

📅 2025-04-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the curse of dimensionality faced by Transformers in high-dimensional function approximation. From the perspective of approximation theory, it gives a rigorous proof that Transformers can overcome this curse for Hölder-continuous functions with exponent β. Methodologically, it builds a context-free theoretical framework based on the Kolmogorov–Arnold representation theorem and designs a minimal architecture comprising only a single-head softmax self-attention layer and several feed-forward layers. Key contributions: (1) with ReLU and floor activations in the feed-forward layers, a depth of O(log(1/ε)) and a width of at most O(ε⁻²⁄ᵝ log(1/ε)) suffice for approximation accuracy ε; (2) if other activation functions are allowed, the feed-forward width can be further reduced to a constant; and (3) the construction avoids the notion of contextual mapping, yielding a more transparent proof than previous Transformer approximation results and rigorously characterizing the model's expressive power.
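To get a feel for the stated bounds, the following sketch evaluates the depth bound O(log(1/ε)) and the width bound O(ε⁻²⁄ᵝ log(1/ε)) numerically, with the hidden constants taken to be 1 purely for illustration (the paper does not specify them):

```python
import math

def width_bound(eps: float, beta: float) -> float:
    """Illustrative width bound eps^(-2/beta) * log(1/eps), constant set to 1."""
    return eps ** (-2.0 / beta) * math.log(1.0 / eps)

def depth_bound(eps: float) -> float:
    """Illustrative depth bound log(1/eps), constant set to 1."""
    return math.log(1.0 / eps)

# Tightening the accuracy target inflates the width polynomially,
# while the required depth grows only logarithmically.
for eps in (1e-1, 1e-2, 1e-3):
    print(f"eps={eps:g}  width~{width_bound(eps, beta=1.0):.1f}  "
          f"depth~{depth_bound(eps):.1f}")
```

The key point the paper emphasizes is that neither bound depends exponentially on the input dimension d×n, which is what "overcoming the curse of dimensionality" means here.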

Technology Category

Application Category

📝 Abstract
The Transformer model is widely used in various application areas of machine learning, such as natural language processing. This paper investigates the approximation of the Hölder continuous function class $\mathcal{H}_{Q}^{\beta}\left([0,1]^{d\times n},\mathbb{R}^{d\times n}\right)$ by Transformers and constructs several Transformers that can overcome the curse of dimensionality. These Transformers consist of one self-attention layer with one head and the softmax function as the activation function, along with several feedforward layers. For example, to achieve an approximation accuracy of $\epsilon$, if the activation functions of the feedforward layers in the Transformer are ReLU and floor, only $\mathcal{O}\left(\log\frac{1}{\epsilon}\right)$ layers of feedforward layers are needed, with widths of these layers not exceeding $\mathcal{O}\left(\frac{1}{\epsilon^{2/\beta}}\log\frac{1}{\epsilon}\right)$. If other activation functions are allowed in the feedforward layers, the width of the feedforward layers can be further reduced to a constant. These results demonstrate that Transformers have a strong expressive capability. The construction in this paper is based on the Kolmogorov-Arnold Representation Theorem and does not require the concept of contextual mapping, hence our proof is more intuitively clear compared to previous Transformer approximation works. Additionally, the translation technique proposed in this paper helps to apply the previous approximation results of feedforward neural networks to Transformer research.
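For context, a standard definition of the Hölder class appearing above is the following (the paper's exact norm and exponent conventions may differ):

```latex
\mathcal{H}_{Q}^{\beta}\!\left([0,1]^{d\times n},\mathbb{R}^{d\times n}\right)
= \left\{ f : [0,1]^{d\times n} \to \mathbb{R}^{d\times n} \;\middle|\;
  \|f(X) - f(Y)\| \le Q\,\|X - Y\|^{\beta}
  \ \text{for all } X, Y \in [0,1]^{d\times n} \right\},
\quad \beta \in (0,1],\ Q > 0.
```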
Problem

Research questions and friction points this paper is trying to address.

Transformers approximate Hölder continuous functions efficiently
Overcoming curse of dimensionality with minimal layers
Enhancing expressive capability using Kolmogorov-Arnold Theorem
Innovation

Methods, ideas, or system contributions that make the work stand out.

Minimal architecture: one single-head softmax self-attention layer plus feedforward layers
ReLU and floor activations yield O(log(1/ε)) feedforward depth
Kolmogorov–Arnold Representation Theorem makes the proof more transparent
Yuling Jiao
School of Artificial Intelligence, Wuhan University, Wuhan, 430072, Hubei Province, China; Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan, 430072, Hubei Province, China
Yanming Lai
Department of Mathematics, The Hong Kong University of Science and Technology
Applied mathematics
Yang Wang
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
Bokai Yan
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China