🤖 AI Summary
This paper addresses fast matrix-vector multiplication for structured $n \times n$ matrices $M$ of VC-dimension $d$: the goal is to preprocess $M$ efficiently so that queries of the form $Mv$ can be answered rapidly. Bridging the gap between worst-case lower bounds and practical efficiency, the paper establishes the first quantitative connection between VC-dimension and matrix-vector multiplication complexity, proposing an algorithm with $\tilde{O}(n^2)$ preprocessing time and $\tilde{O}(n^{2-1/d})$ query time. Crucially, the method supports subquadratic queries even under adversarial corruption of a subquadratic number of entries, and it bypasses, for structured inputs, several conditional lower bounds previously derived from the OMv hypothesis. Given the low constant VC-dimension observed in much real-world data, this offers an explanation for why the naive $O(n^2)$ per-query baseline is beaten in practice, and it yields the first subquadratic high-accuracy dynamic algorithms for fundamental problems including shortest paths and Laplacian system solving.
📝 Abstract
We consider the problem of preprocessing an $n \times n$ matrix $M$, and supporting queries that, for any vector $v$, return the matrix-vector product $Mv$. This problem has been extensively studied in both theory and practice: on one side, practitioners have developed algorithms that are highly efficient in practice, whereas theoreticians have proven that the problem cannot be solved faster than naive multiplication in the worst case. This lower bound holds even in the average case, implying that existing average-case analyses cannot explain this gap between theory and practice. Therefore, we study the problem for structured matrices. We show that for $n \times n$ matrices of VC-dimension $d$, the matrix-vector multiplication problem can be solved with $\tilde{O}(n^2)$ preprocessing and $\tilde{O}(n^{2-1/d})$ query time. Given the low constant VC-dimension observed in most real-world data, our results posit an explanation for why the problem can be solved so much faster in practice. Moreover, our bounds hold even if the matrix does not have low VC-dimension, but is obtained by (possibly adversarially) corrupting at most a subquadratic number of entries of an unknown low VC-dimension matrix. Our results yield the first non-trivial upper bounds for many applications. In previous works, the online matrix-vector (OMv) hypothesis (conjecturing that quadratic time is needed per query) was used to prove many conditional lower bounds, showing that it is impossible to compute and maintain, in subquadratic time, high-accuracy estimates for shortest paths, Laplacian solvers, effective resistance, and triangle detection in graphs subject to node insertions and deletions. Yet, via a reduction to our matrix-vector multiplication result, we show that we can maintain the aforementioned quantities efficiently when the input is structured, providing the first subquadratic upper bounds in the high-accuracy regime.
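One way to build intuition for why low VC-dimension helps: for set systems of bounded VC-dimension, rows can be ordered (e.g., via Welzl-style spanning trees of low crossing number) so that consecutive rows differ in few entries, and then $Mv$ can be answered with one full inner product plus incremental updates. The sketch below is our own toy illustration under that assumption, not the paper's algorithm; `preprocess` and `query` are hypothetical names, and the greedy Hamming-distance chain stands in for the actual low-crossing-number construction:

```python
def preprocess(M):
    """Greedily chain rows so consecutive rows differ in few entries.

    Returns the row order and, for each step, the column positions
    where the row changes from the previous row in the chain.
    (Toy stand-in for a low-crossing-number spanning tree.)
    """
    n = len(M)
    order = [0]
    remaining = set(range(1, n))
    diffs = []
    while remaining:
        last = M[order[-1]]
        # Pick the remaining row closest in Hamming distance to the last one.
        nxt = min(remaining,
                  key=lambda i: sum(a != b for a, b in zip(M[i], last)))
        remaining.remove(nxt)
        diffs.append([j for j in range(len(last)) if M[nxt][j] != last[j]])
        order.append(nxt)
    return order, diffs

def query(M, order, diffs, v):
    """Compute M @ v: one full inner product, then incremental updates."""
    out = [0.0] * len(M)
    cur = sum(m * x for m, x in zip(M[order[0]], v))
    out[order[0]] = cur
    prev = order[0]
    for step, i in enumerate(order[1:]):
        # Only the entries where row i differs from the previous row matter.
        for j in diffs[step]:
            cur += (M[i][j] - M[prev][j]) * v[j]
        out[i] = cur
        prev = i
    return out
```

On a threshold matrix (each row is 1 on a prefix of columns, a VC-dimension-1 set system), consecutive rows in the chain differ in few columns, so each query touches far fewer than $n^2$ entries; in the worst case, of course, the greedy chain gives no such guarantee.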