🤖 AI Summary
Approximate matrix multiplication is a fundamental problem in theoretical computer science. This paper unifies classical randomized algorithms (including random-walk and sketching methods) under a mean estimation framework, yielding a new classical approximation algorithm that avoids invoking exact fast matrix multiplication as a subroutine and improves on the time complexity of prior methods. Furthermore, leveraging quantum multivariate mean estimation, the paper achieves a quantum speedup for approximate matrix multiplication without relying on fast matrix multiplication, surpassing classical lower bounds. Theoretical analysis and empirical evaluation both indicate significant efficiency gains on sparse and low-rank structured matrices. The proposed framework offers a unified approach to large-scale approximate matrix computation, balancing theoretical rigor with practical applicability.
📝 Abstract
The complexity of matrix multiplication is a central topic in computer science. While the focus has traditionally been on exact algorithms, a long line of work also considers randomized algorithms, which return an approximate solution in less time. In this work, we adopt a unifying perspective that frames these randomized algorithms in terms of mean estimation. Using this perspective, we first give refined analyses of classical algorithms based on random walks by Cohen-Lewis ('99), and based on sketching by Sarlós ('06) and Drineas-Kannan-Mahoney ('06). We then propose an improvement on the Cohen-Lewis algorithm, yielding a single classical algorithm that is faster than all of the above approaches, assuming no use of (exact) fast matrix multiplication as a subroutine. Second, we demonstrate a quantum speedup on top of these algorithms using the recent quantum multivariate mean estimation algorithm of Cornelissen-Hamoudi-Jerbi ('22).
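To illustrate the mean estimation viewpoint, here is a minimal sketch of one of the classical algorithms mentioned above: Drineas-Kannan-Mahoney-style sampling of rank-1 outer products. Each sample is an unbiased estimator of the product AB, so averaging many samples is exactly a multivariate mean estimation problem. The function name and parameters are illustrative, not taken from the paper, and this sketch omits the paper's refined analysis and improvements.

```python
import numpy as np

def approx_matmul(A, B, c, seed=None):
    """Estimate A @ B as the mean of c sampled rank-1 outer products.

    Sampling index i with probability p_i proportional to
    ||A[:, i]|| * ||B[i, :]|| and returning the average of the
    rescaled outer products A[:, i] B[i, :] / p_i gives an unbiased
    estimator of A @ B (DKM-style sampling; a sketch under assumed
    notation, not the paper's final algorithm).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Near-optimal sampling probabilities: proportional to the product
    # of the column norm of A and the row norm of B.
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = norms / norms.sum()
    idx = rng.choice(n, size=c, p=p)
    est = np.zeros((A.shape[0], B.shape[1]))
    for i in idx:
        # Each rescaled outer product has expectation A @ B.
        est += np.outer(A[:, i], B[i, :]) / p[i]
    return est / c
```

The estimation error shrinks as the number of samples c grows (at the usual Monte Carlo rate), which is what the mean estimation framing makes precise; the quantum speedup discussed in the paper comes from replacing this classical averaging step with quantum multivariate mean estimation.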