🤖 AI Summary
This work investigates lower bounds on the computational complexity of rectangular matrix multiplication, specifically multiplying an $n \times n$ matrix by an $n \times n^p$ matrix, with focus on whether mainstream tensor-based approaches, particularly those built on the Coppersmith–Winograd tensor family, can achieve $O(n^{p+1})$ time complexity. Combining tensor rank analysis, asymptotic spectrum theory, and barrier proof techniques, the paper establishes a precise numerical barrier for this class of methods, rigorously proving that they cannot attain $O(n^{p+1})$. Consequently, the best lower bound on the dual exponent $\alpha$ provable via these tensors is capped at $0.6218$, improving on the prior barrier of $0.6250$. This is the tightest negative result to date, strengthening known limitations on rectangular matrix multiplication in both numerical precision and generality.
📝 Abstract
We study the algorithmic problem of multiplying large rectangular matrices. We prove that the method that has been used to construct the fastest algorithms for rectangular matrix multiplication cannot give algorithms with complexity $n^{p+1}$ for $n \times n$ by $n \times n^p$ matrix multiplication. In fact, we prove a precise numerical barrier for this method. Our barrier improves the previously known barriers, both in the numerical sense and in its generality. In particular, we prove that any lower bound on the dual exponent of matrix multiplication $\alpha$ via the big Coppersmith-Winograd tensors cannot exceed $0.6218$.
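For context, the dual exponent $\alpha$ referenced in the abstract is standardly defined as follows; this reminder uses the common notation $\omega(1,1,a)$ for the exponent of rectangular matrix multiplication and is not part of the paper's abstract:

```latex
% The dual exponent of matrix multiplication:
% the largest a such that an n x n matrix can be multiplied by an
% n x n^a matrix in n^{2+o(1)} arithmetic operations.
\alpha \;=\; \sup\bigl\{\, a \ge 0 \;:\; \omega(1,1,a) = 2 \,\bigr\},
% where \omega(1,1,a) denotes the smallest value such that n x n by
% n x n^a matrix multiplication can be performed in n^{\omega(1,1,a)+o(1)}
% operations. Note \alpha = 1 would be equivalent to \omega = 2.
```

Under this definition, a lower bound $\alpha \ge a_0$ asserts a fast algorithm for aspect ratio $a_0$; the barrier above states that the big Coppersmith-Winograd tensors can never certify such a bound with $a_0 > 0.6218$.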