Even Faster Kernel Matrix Linear Algebra via Density Estimation

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses key linear algebraic tasks—matrix-vector multiplication, matrix-matrix multiplication, spectral norm estimation, and all-entries summation—on $n \times n$ kernel matrices. We propose a unified approximation framework based on kernel density estimation (KDE), achieving $(1+\varepsilon)$-relative error guarantees. Our core insight is to reduce each matrix operation to a set of KDE queries, thereby circumventing the standard $O(n^2)$ time barrier and attaining complexities nearly matching the optimal KDE runtime—for instance, $\widetilde{O}(n/\varepsilon^2)$ for all-entries summation. Theoretically, we establish the first conditional quadratic-time lower bounds for multiple kernel matrix problems and prove that our KDE-based paradigm achieves tight trade-offs between accuracy and efficiency. Empirically, our method significantly outperforms existing acceleration techniques on high-dimensional datasets.
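The reduction for all-entries summation can be illustrated with a simple Monte Carlo sketch: since each row sum of the kernel matrix equals $n$ times a KDE value, the total sum can be estimated by averaging KDE values over randomly sampled rows. The function below is an illustrative sketch with a brute-force Gaussian KDE, not the paper's algorithm (which answers the KDE queries themselves in sublinear time).

```python
import numpy as np

def gaussian_kernel_sum_estimate(X, num_samples, bandwidth=1.0, rng=None):
    """Estimate the sum of all entries of the Gaussian kernel matrix
    K[i, j] = exp(-||x_i - x_j||^2 / (2 * bandwidth^2)) by averaging
    KDE values over a random sample of rows.

    Illustrative Monte Carlo sketch; the KDE query here is brute force.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    idx = rng.integers(0, n, size=num_samples)
    # KDE value at x_i: (1/n) * sum_j k(x_i, x_j)
    sq_dists = ((X[idx, None, :] - X[None, :, :]) ** 2).sum(-1)
    kde_vals = np.exp(-sq_dists / (2 * bandwidth**2)).mean(axis=1)
    # Each row sum is n * KDE(x_i); the total sum is n times the
    # average row sum, so n^2 times the average KDE value.
    return n * n * kde_vals.mean()
```

With $O(1/\varepsilon^2)$ sampled rows, standard concentration arguments give a $(1+\varepsilon)$-relative-error estimate for well-behaved kernels, matching the shape of the $\widetilde{O}(n/\varepsilon^2)$ bound quoted above.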

📝 Abstract
This paper studies the use of kernel density estimation (KDE) for linear algebraic tasks involving the kernel matrix of a collection of $n$ data points in $\mathbb{R}^d$. In particular, we improve upon existing algorithms for computing the following up to $(1+\varepsilon)$ relative error: matrix-vector products, matrix-matrix products, the spectral norm, and the sum of all entries. The runtimes of our algorithms depend on the dimension $d$, the number of points $n$, and the target error $\varepsilon$. Importantly, the dependence on $n$ in each case is far lower when accessing the kernel matrix through KDE queries as opposed to reading individual entries. Our improvements over the existing best algorithms (particularly those of Backurs, Indyk, Musco, and Wagner '21) for these tasks reduce the polynomial dependence on $\varepsilon$, and additionally decrease the dependence on $n$ in the case of computing the sum of all entries of the kernel matrix. We complement our upper bounds with several lower bounds for related problems, which provide (conditional) quadratic time hardness results and additionally hint at the limits of KDE based approaches for the problems we study.
Problem

Research questions and friction points this paper is trying to address.

Accelerating kernel matrix linear algebra operations via density estimation
Improving runtime efficiency for matrix products and spectral norms
Reducing polynomial dependence on error and data size parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses kernel density estimation for matrix operations
Improves runtime by reducing polynomial dependence on error
Decreases dependence on data points for kernel sums
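The core idea of accessing the kernel matrix through KDE queries can be made concrete for matrix-vector products: $(Kv)_i = \sum_j k(x_i, x_j)\, v_j$ is exactly a weighted KDE query at $x_i$. The sketch below uses a hypothetical brute-force oracle `weighted_kde` to stand in for a fast KDE data structure, and splits $v$ into nonnegative parts since KDE oracles typically expect nonnegative weights; it is an illustration of the reduction, not the paper's implementation.

```python
import numpy as np

def weighted_kde(query, X, weights, bandwidth=1.0):
    """Hypothetical KDE oracle: sum_j weights[j] * k(query, x_j) for a
    Gaussian kernel. A fast KDE data structure would answer this
    approximately in sublinear time; here it is brute force."""
    sq_dists = ((X - query) ** 2).sum(-1)
    return float(weights @ np.exp(-sq_dists / (2 * bandwidth**2)))

def kernel_matvec(X, v, bandwidth=1.0):
    """Compute K @ v with one KDE query per row of K, splitting v into
    its positive and negative parts so all query weights are nonnegative."""
    v_pos, v_neg = np.maximum(v, 0.0), np.maximum(-v, 0.0)
    return np.array([
        weighted_kde(x, X, v_pos, bandwidth) - weighted_kde(x, X, v_neg, bandwidth)
        for x in X
    ])
```

Replacing the brute-force oracle with an approximate KDE data structure is what breaks the $O(n^2)$ barrier: $n$ sublinear-time queries replace $n$ full rows of the kernel matrix.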