Barriers for rectangular matrix multiplication

📅 2020-03-06
🏛️ Computational Complexity
📈 Citations: 12
Influential: 0
🤖 AI Summary
This work investigates limits on the computational complexity of rectangular matrix multiplication, specifically multiplying an $n \times n$ matrix by an $n \times n^p$ matrix, with focus on whether the mainstream tensor-based approaches, particularly those built on the Coppersmith–Winograd tensor family, can achieve $O(n^{p+1})$ time complexity. Combining tensor rank analysis, the theory of asymptotic spectra, and barrier-proof techniques, the paper establishes a precise numerical barrier for this class of methods, rigorously proving their inherent inability to attain $O(n^{p+1})$. As a consequence, any lower bound on the dual exponent $\alpha$ provable via the big Coppersmith–Winograd tensors is capped at $0.6218$, tightening the previously known cap of $0.6250$. This is the strongest known theoretical limitation on this approach to rectangular matrix multiplication, improving on prior barriers both numerically and in generality.
📝 Abstract
We study the algorithmic problem of multiplying large matrices that are rectangular. We prove that the method that has been used to construct the fastest algorithms for rectangular matrix multiplication cannot give algorithms with complexity $n^{p+1}$ for $n \times n$ by $n \times n^p$ matrix multiplication. In fact, we prove a precise numerical barrier for this method. Our barrier improves the previously known barriers, both in the numerical sense, as well as in its generality. In particular, we prove that any lower bound on the dual exponent of matrix multiplication $\alpha$ via the big Coppersmith–Winograd tensors cannot exceed $0.6218$.
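For context, the standard definitions behind the abstract's quantities (these are conventional in the matrix multiplication literature, not spelled out above) can be stated as:

```latex
% \omega(1,1,p): the exponent of n x n by n x n^p matrix multiplication,
% i.e. the infimum of c such that this product uses O(n^c) arithmetic operations.
% The dual exponent \alpha is the largest aspect ratio p for which the product
% is essentially as cheap as reading the input:
\[
  \alpha \;=\; \sup\{\, p \ge 0 \;:\; \omega(1,1,p) = 2 \,\}.
\]
% For p >= 1, an algorithm of complexity n^{p+1} would mean \omega(1,1,p) = p + 1,
% which is optimal, since the output matrix alone has n \cdot n^p = n^{p+1} entries.
```

The barrier in the abstract says that the big Coppersmith–Winograd tensors cannot certify $\alpha \ge 0.6218$, however cleverly they are analyzed.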
Problem

Research questions and friction points this paper is trying to address.

Analyzing limitations of current rectangular matrix multiplication algorithms
Establishing improved numerical barriers for matrix multiplication complexity
Bounding how large a lower bound on the dual exponent the Coppersmith-Winograd tensors can yield
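The $O(n^{p+1})$ target discussed above can be made concrete by comparing operation counts. A minimal sketch (illustrative only, not taken from the paper): the schoolbook algorithm for an $n \times n$ by $n \times n^p$ product uses $n^{2+p}$ scalar multiplications, while $n^{p+1}$ is the output-size lower bound that the barrier shows CW-based methods cannot match.

```python
def naive_mult_count(n: int, p: float) -> int:
    """Scalar multiplications used by the schoolbook algorithm:
    one inner product of length n for each of the n * n**p output entries."""
    m = round(n ** p)       # columns of the right-hand (n x n^p) matrix
    return n * m * n        # n^2 * n^p = n^(2+p)

def target_count(n: int, p: float) -> int:
    """The n^(p+1) count matching the output size: the complexity that
    Coppersmith-Winograd-based methods provably cannot achieve."""
    return round(n ** (p + 1))

# Example: n = 100, p = 2 (multiplying 100x100 by 100x10000).
n, p = 100, 2
print(naive_mult_count(n, p))   # schoolbook: n^4
print(target_count(n, p))       # output size: n^3
```

The gap between the two counts is a factor of $n$; the paper's result says that closing it entirely is impossible for the tensor family underlying the current fastest algorithms.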
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rectangular matrix multiplication barrier analysis
Improved numerical and generality barriers
A 0.6218 cap on dual exponent lower bounds provable via the big Coppersmith-Winograd tensors