🤖 AI Summary
This paper investigates whether the self-attention mechanism—relying solely on linear transformations and the softmax nonlinearity—can serve as a universal approximator for continuous sequence-to-sequence functions over compact domains.
Method: Through interpolation analysis and explicit construction of attention weights, the authors establish approximation guarantees by proving that self-attention can uniformly approximate generalized ReLU functions—a key technical step enabling universal approximation without auxiliary components.
Contribution/Results: The work proves that either (i) two-layer multi-head self-attention or (ii) single-layer self-attention followed by a softmax function suffices to approximate any continuous sequence-to-sequence mapping on a compact domain to arbitrary precision—without requiring conventional feed-forward networks. This constitutes the first rigorous demonstration of universal approximation capability in pure attention architectures. The result is further extended to in-context statistical modeling. By decoupling self-attention from mandatory feed-forward layers, the study challenges the standard Transformer paradigm and provides a theoretical foundation for lightweight, interpretable, feed-forward-free attention models.
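The ReLU-approximation step can be illustrated with a small numerical sketch: a single softmax-attention head whose query attends to two keys with scores `[beta*x, 0]` and values `[x, 0]` outputs `x * sigmoid(beta*x)`, which converges pointwise to `ReLU(x)` as `beta` grows. This is only an illustration of the intuition under assumed score/value choices, not the paper's actual construction.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attn_relu(x, beta=50.0):
    # One query attends to two keys with scores [beta*x, 0]
    # and values [x, 0]. The attention output is
    # x * sigmoid(beta*x), which tends to ReLU(x) as beta -> inf.
    scores = np.array([beta * x, 0.0])
    values = np.array([x, 0.0])
    return float(softmax(scores) @ values)

for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(f"x={x:+.1f}  attn={attn_relu(x):+.6f}  relu={max(x, 0.0):+.6f}")
```

Here `beta` plays the role of an inverse temperature: larger values sharpen the softmax toward a hard selection, tightening the approximation (at the cost of a small transition region near `x = 0`).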
📝 Abstract
We prove that with linear transformations, both (i) two-layer self-attention and (ii) one-layer self-attention followed by a softmax function are universal approximators for continuous sequence-to-sequence functions on compact domains. Our main technique is a new interpolation-based method for analyzing attention's internal mechanism. This leads to our key insight: self-attention is able to approximate a generalized version of ReLU to arbitrary precision, and hence subsumes many known universal approximators. Building on these results, we show that two-layer multi-head attention alone suffices as a sequence-to-sequence universal approximator. In contrast, prior works rely on feed-forward networks to establish universal approximation in Transformers. Furthermore, we extend our techniques to show that (softmax-)attention-only layers are capable of approximating various statistical models in-context. We believe these techniques hold independent interest.