Strassen Attention: Unlocking Compositional Abilities in Transformers Based on a New Lower Bound Method

📅 2025-01-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes fundamental limitations of single-layer softmax Transformers on compositional reasoning tasks (ternary matching, function composition, and composition of binary relations) and provides the first rigorous lower bounds proving these tasks inexpressible in that setting. To overcome these limitations, the authors propose Strassen attention, a novel mechanism inspired by Strassen's matrix multiplication algorithm that provably allows a single Transformer layer to solve all three tasks while running in sub-cubic time, O(n^{2.81}). Experiments on Match3 and on function- and relation-composition benchmarks show that Strassen attention significantly outperforms standard, higher-order, and triangular attention variants. This is the first work to integrate fast matrix multiplication into attention design, simultaneously ensuring theoretical solvability and improving computational scalability.
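
The O(n^{2.81}) figure is the classical cost of Strassen's algorithm, whose seven-product recursion inspires the mechanism. As background only (this is not the paper's attention mechanism, and the cutoff size and power-of-two shape are illustrative assumptions), here is a minimal NumPy sketch of that recursion:

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursion for square matrices whose size is a power of two.
    Seven recursive products replace the naive eight, giving the recurrence
    T(n) = 7*T(n/2) + O(n^2) and hence O(n^{log2 7}) ~ O(n^{2.81}) time."""
    n = A.shape[0]
    if n <= 64:  # illustrative cutoff: fall back to classical multiply
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven recursive products
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four output blocks
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])
```

The key point is that seven recursive multiplications replace eight, which is where the sub-cubic exponent log2(7) ≈ 2.81 comes from.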

📝 Abstract
We propose a novel method to evaluate the theoretical limits of Transformers, allowing us to prove the first lower bounds against one-layer softmax Transformers with infinite precision. We establish those bounds for three tasks that require advanced reasoning. The first task, Match3 (Sanford et al., 2023), requires looking at all triples of positions. The second and third tasks address compositionality-based reasoning: one is composition of functions (Peng et al., 2024) and the other is composition of binary relations. We formally prove the inability of one-layer softmax Transformers to solve any of these tasks. In an attempt to overcome these limitations, we introduce Strassen attention and prove that with this mechanism a one-layer Transformer can in principle solve all these tasks. We also show that it enjoys sub-cubic running-time complexity, making it more scalable than similar previously proposed mechanisms, such as higher-order attention (Sanford et al., 2023). To complement our theoretical findings, we experimentally studied Strassen attention and compared it against standard attention (Vaswani et al., 2017), higher-order attention (Sanford et al., 2023), and triangular attention (Bergen et al., 2021). Our results help to disentangle all these attention mechanisms, highlighting their strengths and limitations. In particular, Strassen attention outperforms standard attention significantly on all the tasks. Altogether, understanding the theoretical limitations can guide research towards scalable attention mechanisms that improve the reasoning abilities of Transformers.
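
To make the three tasks concrete, the following brute-force Python reference spells out one plausible reading of each; the exact encodings and conventions (e.g., the modulus M in Match3, or whether indices in a triple may repeat) follow the cited papers, so these helpers are illustrative rather than the benchmarks' definitions:

```python
from itertools import product

def match3(xs, M):
    """Match3 (after Sanford et al., 2023, up to conventions): position i is
    marked iff some pair j, k completes a triple with
    x_i + x_j + x_k = 0 (mod M). Brute force is cubic in len(xs)."""
    n = len(xs)
    return [any((xs[i] + xs[j] + xs[k]) % M == 0
                for j, k in product(range(n), repeat=2))
            for i in range(n)]

def compose_functions(f, g, x):
    """Function composition (after Peng et al., 2024): evaluate g(f(x))
    when f and g are given as lookup tables (dicts)."""
    return g[f[x]]

def compose_relations(R, S):
    """Composition of binary relations R and S: (a, c) belongs to the
    composition iff some b has (a, b) in R and (b, c) in S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}
```

For example, match3([1, 2, 4], M=7) returns [True, True, True], since 1 + 2 + 4 = 7 ≡ 0 (mod 7). The lower bounds say a one-layer softmax Transformer cannot compute such triple-wise predicates, while a one-layer Transformer with Strassen attention can.
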
Problem

Research questions and friction points this paper is trying to address.

Transformer limitations
Advanced reasoning tasks
Attention mechanism enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Strassen attention
Single-layer Transformer
Advanced reasoning tasks
👥 Authors

Alexander Kozachinskiy
Postdoc, CENIA Chile
Theoretical Computer Science

Felipe Urrutia
CENIA
Natural language processing, explainability

Hector Jimenez
National Center for Artificial Intelligence (CENIA Chile), University of Chile

Tomasz Steifer
Polish Academy of Sciences
Machine learning & AI theory

Germán Pizarro
Researcher

Matías Fuentes
Institute for Mathematical and Computational Engineering, Pontifical Catholic University of Chile

Francisco Meza
Institute for Mathematical and Computational Engineering, Pontifical Catholic University of Chile

Cristian Buc
National Center for Artificial Intelligence (CENIA Chile)

Cristóbal Rojas
National Center for Artificial Intelligence (CENIA Chile), Institute for Mathematical and Computational Engineering, Pontifical Catholic University of Chile