Fast Rank Adaptive CUR via a Recycled Small Sketch

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of simultaneously achieving computational efficiency, structural preservation (e.g., sparsity, non-negativity), and interpretability in large-scale low-rank matrix approximation, this paper proposes IterativeCUR, a new algorithm grounded in randomized numerical linear algebra. IterativeCUR is the first CUR decomposition method to support error-tolerance-driven adaptive rank selection. It avoids repeated random sampling by reusing a single small random sketch, iteratively updating the selected row/column subsets and the associated low-dimensional subspace. Compared to state-of-the-art methods, IterativeCUR achieves up to 4× speedup over leading CUR algorithms and up to 40× acceleration over adaptive randomized SVD, all without sacrificing approximation accuracy. This improvement in scalability and practicality makes IterativeCUR particularly suitable for large-scale machine learning and scientific computing applications requiring structured, interpretable, and efficient low-rank approximations.

📝 Abstract
The computation of accurate low-rank matrix approximations is central to improving the scalability of various techniques in machine learning, uncertainty quantification, and control. Traditionally, low-rank approximations are constructed using SVD-based approaches such as truncated SVD or RandomizedSVD. Although these SVD approaches -- especially RandomizedSVD -- have proven to be very computationally efficient, other low-rank approximation methods can offer even greater performance. One such approach is the CUR decomposition, which forms a low-rank approximation using direct row and column subsets of a matrix. Because CUR uses direct matrix subsets, it is also often better able to preserve native matrix structures like sparsity or non-negativity than SVD-based approaches and can facilitate data interpretation in many contexts. This paper introduces IterativeCUR, which draws on previous work in randomized numerical linear algebra to build a new algorithm that is highly competitive compared to prior work: (1) It is adaptive in the sense that it takes as an input parameter the desired tolerance, rather than an a priori guess of the numerical rank. (2) It typically runs significantly faster than both existing CUR algorithms and techniques such as RandomizedSVD, in particular when these methods are run in an adaptive rank mode. Its asymptotic complexity is $\mathcal{O}(mn + (m+n)r^2 + r^3)$ for an $m \times n$ matrix of numerical rank $r$. (3) It relies on a single small sketch from the matrix that is successively downdated as the algorithm proceeds. We demonstrate through extensive experiments that IterativeCUR achieves up to $4\times$ speed-up over state-of-the-art pivoting-on-sketch approaches with no loss of accuracy, and up to $40\times$ speed-up over rank-adaptive randomized SVD approaches.
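The CUR construction described in the abstract can be sketched in a few lines of NumPy. This is the textbook CUR form (A ≈ C·U·R with C, R direct column/row subsets and U the pseudoinverse of their intersection), not the paper's IterativeCUR algorithm; the helper `basic_cur` and the index choices are illustrative assumptions.

```python
import numpy as np

def basic_cur(A, row_idx, col_idx):
    """Form a CUR approximation A ~= C @ U @ R from chosen row/column subsets.

    Textbook construction for illustration only (not the paper's IterativeCUR):
    C and R are direct column/row subsets of A, so they inherit structure such
    as sparsity or non-negativity, and U is the pseudoinverse of the
    intersection submatrix linking them.
    """
    C = A[:, col_idx]                                # selected columns of A
    R = A[row_idx, :]                                # selected rows of A
    U = np.linalg.pinv(A[np.ix_(row_idx, col_idx)])  # linking matrix
    return C, U, R

# Example: an exactly rank-2 matrix is recovered from 2 rows and 2 columns.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 80))
C, U, R = basic_cur(A, row_idx=[0, 1], col_idx=[0, 1])
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)  # near machine precision
```

Note that C and R are literal slices of A, which is the source of CUR's interpretability advantage over SVD factors.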
Problem

Research questions and friction points this paper is trying to address.

Developing adaptive CUR decomposition for low-rank matrix approximation
Improving computational efficiency over existing CUR and SVD methods
Using recycled small sketches to reduce algorithm complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive-rank CUR decomposition driven by a user-supplied error tolerance
Recycles a single small sketch, downdating it as the algorithm proceeds
Runs faster than pivoting-on-sketch CUR and rank-adaptive randomized SVD methods
Nathaniel Pritchard
Mathematical Institute at the University of Oxford
Taejun Park
Institute of Mathematics at EPF Lausanne
Yuji Nakatsukasa
Mathematical Institute at the University of Oxford
Per-Gunnar Martinsson
Professor of Mathematics, University of Texas at Austin
scientific computation, numerical analysis, numerical PDEs, data analysis, numerical linear algebra