Output-Sparse Matrix Multiplication Using Compressed Sensing

📅 2025-08-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper studies the Output-Sparse Matrix Multiplication (OSMM) problem: given $n \times n$ matrices $A$ and $B$ over an arbitrary ring, whose product $AB$ contains at most $O(n^\delta)$ nonzero entries for $\delta \in [0,2]$, compute $AB$ efficiently. We propose two new algorithms: (1) the first deterministic OSMM algorithm, based on two-stage compressed sensing and optimized rectangular matrix multiplication, achieving time complexity $\tilde{O}(n^{\omega(\delta/2,1,1)})$; and (2) a concise randomized algorithm integrating compressed sensing, random hashing, and matrix multiplication verification, attaining $\tilde{O}(n^{\omega(\delta-1,1,1)})$, matching the SODA 2024 optimal bound. Both algorithms naturally handle sparse inputs. Moreover, we establish the first tight lower bound for OSMM, proving theoretical optimality of our upper bounds.

📝 Abstract
We give two algorithms for output-sparse matrix multiplication (OSMM), the problem of multiplying two $n \times n$ matrices $A, B$ when their product $AB$ is promised to have at most $O(n^\delta)$ many non-zero entries for a given value $\delta \in [0, 2]$. We then show how to speed up these algorithms in the fully sparse setting, where the input matrices $A, B$ are themselves sparse. All of our algorithms work over arbitrary rings. Our first, deterministic algorithm for OSMM works via a two-pass reduction to compressed sensing. It runs in roughly $n^{\omega(\delta/2, 1, 1)}$ time, where $\omega(\cdot, \cdot, \cdot)$ is the rectangular matrix multiplication exponent. This substantially improves on prior deterministic algorithms for output-sparse matrix multiplication. Our second, randomized algorithm for OSMM works via a reduction to compressed sensing and a variant of matrix multiplication verification, and runs in roughly $n^{\omega(\delta - 1, 1, 1)}$ time. This algorithm and its extension to the fully sparse setting have running times that match those of the (randomized) algorithms for OSMM and FSMM, respectively, in recent work of Abboud, Bringmann, Fischer, and Künnemann (SODA, 2024). Our algorithm uses different techniques and is arguably simpler. Finally, we observe that the running times of our randomized algorithm and of the algorithm of Abboud et al. are optimal, via a simple reduction from rectangular matrix multiplication.
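The compressed-sensing ingredient rests on a standard idea: a sparse vector can be recovered exactly from a small number of hashed, signed linear measurements. As a hedged toy illustration (this is a generic CountSketch-style peeling recovery, not the paper's two-pass construction; all names and parameters are illustrative), the following NumPy sketch recovers an $s$-sparse vector, standing in for one sparse column of $AB$, from repeated rounds of $2s$ bucketed measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 64, 4

# An s-sparse "unknown" vector, standing in for one sparse column of AB.
v = np.zeros(n, dtype=np.int64)
support = rng.choice(n, size=s, replace=False)
v[support] = rng.integers(1, 10, size=s)

# One hashing round: hash each index into one of b buckets with a random
# sign, and sum. In the OSMM setting the measurement y = Sv would come
# from sketched matrix products rather than from AB itself.
def measure(vec, b, rng):
    h = rng.integers(0, b, size=n)      # bucket per index
    g = rng.choice([-1, 1], size=n)     # sign per index
    y = np.zeros(b, dtype=np.int64)
    np.add.at(y, h, g * vec)            # unbuffered scatter-add
    return h, g, y

# Peeling recovery: an index that lands alone in its bucket can be read
# off exactly; repeat with fresh hashes until everything is recovered.
recovered = np.zeros(n, dtype=np.int64)
residual = v.copy()
for _ in range(100):
    if not residual.any():
        break
    h, g, y = measure(residual, 2 * s, rng)
    occupancy = np.bincount(h[residual != 0], minlength=2 * s)
    for i in np.flatnonzero(residual):
        if occupancy[h[i]] == 1:        # isolated: bucket holds g[i]*residual[i]
            recovered[i] = g[i] * y[h[i]]
            residual[i] = 0

assert np.array_equal(recovered, v)
```

With $2s$ buckets per round, each nonzero index is isolated with constant probability, so the expected number of rounds is $O(\log s)$; the paper's deterministic algorithm replaces this random hashing with an explicit measurement scheme.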
Problem

Research questions and friction points this paper is trying to address.

Multiply $n \times n$ matrices efficiently when the product $AB$ is promised to be sparse
Improve on prior deterministic output-sparse algorithms via compressed sensing
Match the best known randomized running times with simpler techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Output-sparse matrix multiplication via compressed sensing
Deterministic algorithm via a two-pass reduction to compressed sensing
Simpler randomized algorithm matching the SODA 2024 bounds, with a matching lower bound
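A key enabler of the randomized approach is that the support of a sparse product can be located without ever forming $AB$: matrix-vector products like $A(Bx)$ cost only $O(n^2)$. As a hedged, minimal NumPy sketch (not the paper's algorithm; the toy matrices and sizes are illustrative), the following locates the nonzero rows and columns of $AB$ with two such products, then multiplies only the supported submatrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy inputs whose product AB is supported on rows {2, 5} x cols {1, 6}.
A = np.zeros((n, n))
B = np.zeros((n, n))
A[2, :] = rng.integers(1, 5, size=n)
A[5, :] = rng.integers(1, 5, size=n)
B[:, 1] = rng.integers(1, 5, size=n)
B[:, 6] = rng.integers(1, 5, size=n)

# Locate the nonzero rows of AB in O(n^2): y = A(Bx) never forms AB,
# and over a random x an entry y_i vanishes iff row i of AB is zero
# (with high probability over a field; deterministically here, since
# all entries are nonnegative and x is positive).
x = rng.integers(1, 10**6, size=n).astype(float)
rows = np.flatnonzero(A @ (B @ x))

# Locate the nonzero columns symmetrically via x^T(AB) = (x^T A)B.
cols = np.flatnonzero((x @ A) @ B)

# Multiply only the small submatrix on the located support.
C = np.zeros((n, n))
C[np.ix_(rows, cols)] = A[rows, :] @ B[:, cols]

assert np.array_equal(C, A @ B)
```

When $AB$ has $O(n^\delta)$ nonzeros, the final submatrix product is rectangular and small, which is where the $\omega(\cdot, \cdot, \cdot)$ exponents in the stated running times enter.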