When Do Transformers Outperform Feedforward and Recurrent Networks? A Statistical Perspective

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the sample complexity advantage of Transformers over feedforward networks (FFNs) and recurrent neural networks (RNNs) for dynamic sparse sequence modeling—tasks where output depends only on a small, unknown subset of input tokens. Method: We formalize a sparse retrieval-and-generation task and conduct a statistical learning–theoretic analysis, combining attention mechanism modeling with rigorous lower-bound proofs. Contribution/Results: We establish the first strict sample complexity hierarchy among these architectures for this task: (i) a single-layer Transformer with at least $q$ attention heads achieves $O(1)$ sample complexity—constant in sequence length $N$; (ii) RNNs require $N^{\Omega(1)}$ samples; and (iii) FFNs still require $\Omega(N)$ samples. Our analysis reveals that Transformers attain fundamental statistical efficiency gains via adaptive sparse attention, providing theoretical justification for their architectural superiority in sparse dependency modeling.

📝 Abstract
Theoretical efforts to prove advantages of Transformers in comparison with classical architectures such as feedforward and recurrent neural networks have mostly focused on representational power. In this work, we take an alternative perspective and prove that even with infinite compute, feedforward and recurrent networks may suffer from larger sample complexity compared to Transformers, as the latter can adapt to a form of dynamic sparsity. Specifically, we consider a sequence-to-sequence data generating model on sequences of length $N$, in which the output at each position depends only on $q$ relevant tokens with $q \ll N$, and the positions of these tokens are described in the input prompt. We prove that a single-layer Transformer can learn this model if and only if its number of attention heads is at least $q$, in which case it achieves a sample complexity almost independent of $N$, while recurrent networks require $N^{\Omega(1)}$ samples on the same problem. If we simplify this model, recurrent networks may achieve a complexity almost independent of $N$, while feedforward networks still require $N$ samples. Consequently, our proposed sparse retrieval model illustrates a natural hierarchy in sample complexity across these architectures.
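The sparse retrieval model described in the abstract can be illustrated with a small generator. This is a hedged sketch of one plausible instantiation, not the paper's exact construction: the function name, the pointer encoding, and the choice of "sum mod vocab" as the output function are all assumptions made here for concreteness. The essential property it reproduces is that each output position depends on only $q$ of the $N$ input tokens, with the relevant positions supplied in the prompt itself.

```python
import random

def sample_sparse_retrieval(N=16, q=2, vocab=10, seed=None):
    """Generate one (tokens, pointers, targets) example of length N.

    Each position i carries a token value and q pointer indices; the
    target at i is a function (here: sum mod vocab, an arbitrary choice)
    of only the q tokens those pointers select -- dynamic sparsity.
    """
    rng = random.Random(seed)
    tokens = [rng.randrange(vocab) for _ in range(N)]            # token values
    pointers = [[rng.randrange(N) for _ in range(q)]             # prompt-specified
                for _ in range(N)]                               # relevant positions
    targets = [sum(tokens[p] for p in ptrs) % vocab              # depends on q tokens only
               for ptrs in pointers]
    return tokens, pointers, targets

tokens, pointers, targets = sample_sparse_retrieval(N=8, q=2, seed=0)
```

Because the pointers vary per example, an architecture must learn to route information adaptively; this is what a $q$-head attention layer can do in one step, while an FFN or RNN must effectively account for all $N$ positions.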
Problem

Research questions and friction points this paper is trying to address.

Do Transformers hold a provable sample-complexity advantage over feedforward and recurrent networks, beyond representational power?
Can attention adapt to dynamic sparsity, where each output depends on a few tokens whose positions are specified in the prompt?
How many samples do recurrent and feedforward networks need on such sparse sequence models compared to Transformers?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformers adapt to dynamic sparsity via attention, yielding statistical rather than merely representational gains
A single-layer Transformer learns the sparse retrieval model if and only if it has at least q attention heads
With q heads, the Transformer's sample complexity is almost independent of sequence length N