🤖 AI Summary
This work exposes fundamental limitations of single-layer softmax Transformers on compositional reasoning tasks (ternary matching, function composition, and binary relation composition) and provides the first rigorous theoretical lower bounds proving their inexpressibility. To overcome these limitations, the authors propose Strassen attention, a novel attention mechanism inspired by Strassen's matrix multiplication algorithm that provably enables a single Transformer layer to solve all three reasoning tasks exactly while running in sub-cubic time, O(n^{2.81}). Experiments on Match3 and function/relation composition benchmarks show that Strassen attention significantly outperforms standard, higher-order, and triangular attention variants. This is the first work to integrate fast matrix multiplication into attention design, simultaneously ensuring theoretical solvability and improving computational scalability.
📝 Abstract
We propose a novel method to evaluate the theoretical limits of Transformers, allowing us to prove the first lower bounds against one-layer softmax Transformers with infinite precision. We establish those bounds for three tasks that require advanced reasoning. The first task, Match3 (Sanford et al., 2023), requires looking at all triples of positions. The second and third tasks address compositionality-based reasoning: one is composition of functions (Peng et al., 2024) and the other is composition of binary relations. We formally prove the inability of one-layer softmax Transformers to solve any of these tasks. In an attempt to overcome these limitations, we introduce Strassen attention and prove that with this mechanism a one-layer Transformer can in principle solve all these tasks. We also show that it enjoys sub-cubic running-time complexity, making it more scalable than similar previously proposed mechanisms, such as higher-order attention (Sanford et al., 2023). To complement our theoretical findings, we experimentally study Strassen attention and compare it against standard (Vaswani et al., 2017), higher-order (Sanford et al., 2023), and triangular attention (Bergen et al., 2021). Our results help to disentangle all these attention mechanisms, highlighting their strengths and limitations. In particular, Strassen attention outperforms standard attention significantly on all the tasks. Altogether, understanding the theoretical limitations can guide research towards scalable attention mechanisms that improve the reasoning abilities of Transformers.
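To make the Match3 task concrete, here is a minimal brute-force sketch in Python. It follows one common formulation of the task (for each position `i`, decide whether some pair of positions `j, k` makes the triple sum vanish modulo `M`); the exact definition in Sanford et al. (2023) may differ in details such as whether indices must be distinct. The explicit triple loop makes the O(n³) blow-up visible, which is what motivates sub-cubic attention mechanisms.

```python
def match3(xs, M):
    """Brute-force Match3: for each position i, check whether there exist
    positions j, k with x_i + x_j + x_k == 0 (mod M).

    The nested scan over (j, k) for every i costs O(n^3) overall --
    the cubic interaction that standard pairwise attention cannot capture
    in one layer and that motivates sub-cubic alternatives.
    """
    n = len(xs)
    return [
        any((xs[i] + xs[j] + xs[k]) % M == 0
            for j in range(n) for k in range(n))
        for i in range(n)
    ]

# Example: with M = 6, every position of [1, 2, 3] participates in a
# zero triple (1+2+3 == 6), while [1, 1, 1] mod 5 has none.
print(match3([1, 2, 3], 6))
print(match3([1, 1, 1], 5))
```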