Positional Attention: Expressivity and Learnability of Algorithmic Computation

📅 2024-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how Transformers whose attention relies solely on positional information can execute algorithmic tasks such as sorting, counting, and arithmetic. It introduces *positional attention*, an inductive bias in which attention weights depend exclusively on positional encodings while the architecture remains fully parallel. The authors prove that positional-attention Transformers match the expressivity of parallel computational models at the cost of only *O*(log *n*) depth in the input length *n*. Theoretically, they characterize a trade-off in sample complexity between parameter norms and network depth. Empirically, the model outperforms standard Transformers on algorithmic tasks whose solutions rely on positional information and generalizes well out of distribution on such tasks. By combining inductive-bias design, complexity analysis, and algorithmic benchmarking, the work moves toward more interpretable and analyzable structured reasoning models.

📝 Abstract
There is a growing interest in the ability of neural networks to execute algorithmic tasks (e.g., arithmetic, summary statistics, and sorting). The goal of this work is to better understand the role of attention in Transformers for algorithmic execution. The importance of attention for this setting has been studied theoretically and empirically using parallel computational models. Notably, many parallel algorithms communicate between processors solely using positional information. Inspired by this observation, we investigate how Transformers can execute algorithms using positional attention, where attention weights depend exclusively on positional encodings. We prove that Transformers with positional attention (positional Transformers) maintain the same expressivity as parallel computational models, incurring a logarithmic depth cost relative to the input length. We analyze their in-distribution learnability and explore how parameter norms in positional attention affect sample complexity. Our results show that positional Transformers introduce a learning trade-off: while they exhibit better theoretical dependence on parameter norms, certain tasks may require more layers, which can, in turn, increase sample complexity. Finally, we empirically explore the out-of-distribution performance of positional Transformers and find that they perform well in tasks where the underlying algorithmic solution relies on positional information.
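The core mechanism described in the abstract — attention weights computed exclusively from positional encodings, while values still come from the input — can be sketched as follows. This is a minimal single-head illustration, not the authors' implementation; the class name, sinusoidal encodings, and layer shapes are assumptions made for the example.

```python
import math
import torch
import torch.nn as nn


class PositionalAttention(nn.Module):
    """Hypothetical sketch of single-head positional attention: queries and
    keys are computed from fixed positional encodings only, so the attention
    pattern is independent of the input values."""

    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        # Standard sinusoidal positional encodings, one row per position.
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float32)
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n, d_model)
        n = x.size(1)
        p = self.pe[:n]                                # (n, d_model)
        q, k = self.w_q(p), self.w_k(p)                # from positions only
        scores = q @ k.T / math.sqrt(q.size(-1))       # (n, n), input-independent
        weights = torch.softmax(scores, dim=-1)
        # Values are still a function of the input; weights are not.
        return weights @ self.w_v(x)                   # (batch, n, d_model)
```

Because the attention weights are fixed given the sequence length, the layer acts linearly on its input — the "communication pattern" between positions is decided before the data is seen, mirroring how many parallel algorithms route messages purely by processor index.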
Problem

Research questions and friction points this paper is trying to address.

Positional Attention
Transformer Models
Mathematical Tasks Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Positional Attention
Transformer Model
Logarithmic Complexity