🤖 AI Summary
This work targets structured square matrices of orders 2–5—including general, upper/lower triangular, symmetric, skew-symmetric, and their transpose products—and aims to reduce the multiplicative constant in the asymptotic complexity of matrix multiplication via explicit low-rank bilinear non-commutative schemes.
Method: We introduce a flip-graph search over tensor decompositions on finite fields (𝔽₂ and 𝔽₃), lifting the discovered schemes to ℤ or ℚ. Searching over 𝔽₃ in particular enables systematic discovery of rational schemes that fundamentally require the inverse of 2 (i.e., involve 2⁻¹), overcoming a limitation of AlphaTensor.
Contribution/Results: We obtain rank-34 schemes for two structured 4×4 products, the first explicit schemes of this rank: multiplying a general matrix by its transpose, which reduces the multiplicative constant from 26/41 ≈ 0.634 to 8/13 ≈ 0.615, and multiplying an upper-triangular matrix by a general matrix, which reduces it from 8/13 ≈ 0.615 to 22/37 ≈ 0.595. We also set new rank records: rank 5 for 2×2 symmetric × symmetric multiplication, and rank 14 for 3×3 skew-symmetric × general multiplication (improving on AlphaTensor's rank 15).
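For context, a rank-r bilinear non-commutative scheme of the kind reported here writes each entry of the product as a linear combination of r bilinear products of linear forms in the input entries; because no scheme multiplication commutes an A-entry past a B-entry, the scheme stays valid when entries are matrix blocks, which is what enables recursion. A minimal sketch using the classic rank-7 Strassen scheme for general 2×2 multiplication (a well-known scheme, not one of the paper's new ones) illustrates the format:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2))

# Strassen's 7 bilinear products: each is (linear form in A) * (linear form in B)
m1 = (A[0, 0] + A[1, 1]) * (B[0, 0] + B[1, 1])
m2 = (A[1, 0] + A[1, 1]) * B[0, 0]
m3 = A[0, 0] * (B[0, 1] - B[1, 1])
m4 = A[1, 1] * (B[1, 0] - B[0, 0])
m5 = (A[0, 0] + A[0, 1]) * B[1, 1]
m6 = (A[1, 0] - A[0, 0]) * (B[0, 0] + B[0, 1])
m7 = (A[0, 1] - A[1, 1]) * (B[1, 0] + B[1, 1])

# Each output entry is a linear combination of the 7 products
C = np.array([[m1 + m4 - m5 + m7, m3 + m5],
              [m2 + m4,           m1 - m2 + m3 + m6]])

assert np.allclose(C, A @ B)
```

For structured products, fewer than r of the bilinear products involve both structured operands, which is why the paper counts "recursive calls" (10 of 34, or 12 of 34) separately from the total rank when deriving the multiplicative constant.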
📝 Abstract
We give explicit low-rank bilinear non-commutative schemes for multiplying structured $n \times n$ matrices with $2 \leq n \leq 5$, which serve as building blocks for recursive algorithms with improved multiplicative factors in asymptotic complexity. Our schemes are discovered over $\mathbb{F}_2$ or $\mathbb{F}_3$ and lifted to $\mathbb{Z}$ or $\mathbb{Q}$. Using a flip graph search over tensor decompositions, we derive schemes for general, upper-triangular, lower-triangular, symmetric, and skew-symmetric inputs, as well as products of a structured matrix with its transpose. In particular, we obtain $4 \times 4$ rank-34 schemes: (i) multiplying a general matrix by its transpose using 10 recursive calls, improving the factor from 26/41 (0.634) to 8/13 (0.615); and (ii) multiplying an upper-triangular matrix by a general matrix using 12 recursive calls, improving the factor from 8/13 (0.615) to 22/37 (0.595). Additionally, using $\mathbb{F}_3$ flip graphs, we discover schemes over $\mathbb{Q}$ that fundamentally require the inverse of 2, including a $2 \times 2$ symmetric-symmetric multiplication of rank 5 and a $3 \times 3$ skew-symmetric-general multiplication of rank 14 (improving upon AlphaTensor's 15).
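As a hedged illustration of the search primitive (a sketch of the standard flip move on tensor decompositions, not code from this work): a rank-$r$ decomposition is a sum of rank-one tensors $a_i \otimes b_i \otimes c_i$, and when two terms share a factor, say $a_i = a_j$, they can be rewritten without changing the sum, giving a neighboring vertex in the flip graph. Over $\mathbb{F}_2$, where subtraction equals addition:

```python
import numpy as np

def rank_one(a, b, c):
    # rank-one tensor a ⊗ b ⊗ c over F2
    return np.einsum('i,j,k->ijk', a, b, c) % 2

# two rank-one terms sharing the same first factor a
a  = np.array([1, 0, 1])
b1 = np.array([1, 1, 0]); c1 = np.array([0, 1, 1])
b2 = np.array([0, 1, 1]); c2 = np.array([1, 0, 1])

before = (rank_one(a, b1, c1) + rank_one(a, b2, c2)) % 2

# flip move: a⊗b1⊗c1 + a⊗b2⊗c2 = a⊗(b1+b2)⊗c1 + a⊗b2⊗(c2+c1)
# (over F2 the cross term a⊗b2⊗c1 appears twice and cancels)
after = (rank_one(a, (b1 + b2) % 2, c1) + rank_one(a, b2, (c2 + c1) % 2)) % 2

assert np.array_equal(before, after)
```

Walking such flips (plus rank-reducing moves when two terms merge) explores decompositions of the matrix-multiplication tensor cheaply over a finite field; a scheme found over $\mathbb{F}_2$ or $\mathbb{F}_3$ must then be lifted to $\mathbb{Z}$ or $\mathbb{Q}$ to be usable over the rationals.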