Fast One-Pass Sparse Approximation of the Top Eigenvectors of Huge Low-Rank Matrices? Yes, $MAM^*$!

📅 2025-07-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
For ultra-large-scale low-rank matrices—containing up to $10^{16}$ entries—that cannot be fully loaded into memory, this paper addresses the challenge of computing sparse approximations of dominant eigenvectors under strict memory constraints. We propose a single-pass, sublinear-memory algorithm that integrates linear sketching with compressed sensing. Leveraging the matrix’s low-rank prior, our method designs compact linear sketches and recovers top eigenvectors via sparse reconstruction—achieving computational complexity dependent solely on the target sparsity level, not the ambient matrix dimensions. Experimental results demonstrate high approximation accuracy alongside substantial reductions in both time and space complexity. To the best of our knowledge, this is the first approach enabling efficient, scalable sparse principal component approximation for exascale matrices.

📝 Abstract
Motivated by applications such as sparse PCA, in this paper we present provably accurate one-pass algorithms for the sparse approximation of the top eigenvectors of extremely massive matrices based on a single compact linear sketch. The resulting compressive-sensing-based approaches can approximate the leading eigenvectors of huge, approximately low-rank matrices that are too large to store in memory using a single pass over their entries, while utilizing a total memory footprint on the order of the much smaller desired sparse eigenvector approximations. Finally, the compressive sensing recovery algorithm itself (which takes the gathered compressive matrix measurements as input and outputs sparse approximations of the top eigenvectors) can be formulated to run in time that depends principally on the size of the sought sparse approximations, making its runtime sublinear in the size of the large matrix whose eigenvectors one aims to approximate. Preliminary experiments on huge matrices having $\sim 10^{16}$ entries illustrate the developed theory and demonstrate the practical potential of the proposed approach.
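As a concrete illustration of the pipeline described above, the NumPy sketch below plants a sparse top eigenvector in a small rank-1 test matrix, forms the compact sketch $Y = MAM^\top$, and recovers the eigenvector from the sketch's top eigenvector by orthogonal matching pursuit. The dimensions, the rank-1 test matrix, and the OMP recovery step are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 2000, 200, 5              # ambient dimension, sketch size, sparsity

# Plant an s-sparse unit top eigenvector (illustrative test instance).
support = rng.choice(n, size=s, replace=False)
v = np.zeros(n)
v[support] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
A = 3.0 * np.outer(v, v)            # rank-1 stand-in for a huge low-rank matrix

# Compact linear sketch Y = M A M^T: an m x m matrix, tiny compared to A.
M = rng.standard_normal((m, n)) / np.sqrt(m)
Y = M @ A @ M.T

# For rank-1 A, the top eigenvector of Y is exactly +/- M v / ||M v||.
u = np.linalg.eigh(Y)[1][:, -1]

# Recover the sparse eigenvector from u ~ M x by orthogonal matching pursuit.
idx, r = [], u.copy()
for _ in range(s):
    idx.append(int(np.argmax(np.abs(M.T @ r))))
    coef = np.linalg.lstsq(M[:, idx], u, rcond=None)[0]
    r = u - M[:, idx] @ coef
x = np.zeros(n)
x[idx] = coef
x /= np.linalg.norm(x)
print(abs(x @ v))                   # near 1 when the support is recovered
```

Note that this toy OMP loop still correlates against all $n$ columns of $M$; the recovery algorithm in the paper is additionally formulated so that its runtime scales with the size of the sparse approximation rather than with $n$.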
Problem

Research questions and friction points this paper is trying to address.

Efficient sparse approximation of top eigenvectors
One-pass algorithm for massive low-rank matrices
Sublinear runtime for huge matrix eigenvector approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

One-pass algorithm for sparse eigenvector approximation
Compressive sensing with compact linear sketch
Sublinear runtime dependent on approximation size
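The one-pass property claimed above can be made concrete: because the sketch $Y = MAM^\top$ is linear in $A$, it can be accumulated from a stream of rows of $A$ without ever holding $A$ in memory. A minimal sketch of that accumulation, with hypothetical small dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 40
M = rng.standard_normal((m, n)) / np.sqrt(m)

# A low-rank matrix that, at full scale, would only exist as a stream of entries.
g = rng.standard_normal(n)
A = np.outer(g, g)

# One-pass accumulation: Y = sum_i outer(M[:, i], M @ A[i, :]).
# Working memory is O(m^2 + n), independent of the n^2 entries of A.
Y = np.zeros((m, m))
for i in range(n):
    row = A[i]                      # row i arrives from the stream, then is discarded
    Y += np.outer(M[:, i], M @ row)

print(np.allclose(Y, M @ A @ M.T))  # streamed sketch matches the direct product
```

Each entry of the stream touches only one column of $M$ and the $m \times m$ accumulator, which is what keeps the memory footprint sublinear in the matrix size.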