Query Efficient Structured Matrix Learning

📅 2025-07-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work studies efficient learning of the optimal approximation to an unknown matrix $A$ within structured matrix families (e.g., low-rank, sparse, banded, or linear subspaces), under a black-box model where only matrix-vector queries $x \mapsto Ax$ and $x \mapsto A^\top x$ are accessible. We propose a unified framework integrating matrix sketching, covering numbers, and dual-query analysis. Our main contributions are: (i) the first characterization of query complexity lower bounds for general structured families; and (ii) near-optimal algorithms matching these bounds. Specifically, for a finite family $\mathcal{F}$, only $\widetilde{O}(\sqrt{\log|\mathcal{F}|})$ queries suffice; for a $q$-dimensional linear matrix family, query complexity improves from the classical $O(q)$ to $\widetilde{O}(\sqrt{q})$, which is tight up to logarithmic factors. This yields near-quadratic speedups over conventional approaches, establishing a new paradigm for fast matrix approximation, preconditioner learning, and differential operator identification.

📝 Abstract
We study the problem of learning a structured approximation (low-rank, sparse, banded, etc.) to an unknown matrix $A$ given access to matrix-vector product (matvec) queries of the form $x \rightarrow Ax$ and $x \rightarrow A^Tx$. This problem is of central importance to algorithms across scientific computing and machine learning, with applications to fast multiplication and inversion for structured matrices, building preconditioners for first-order optimization, and as a model for differential operator learning. Prior work focuses on obtaining query complexity upper and lower bounds for learning specific structured matrix families that commonly arise in applications. We initiate the study of the problem in greater generality, aiming to understand the query complexity of learning approximations from general matrix families. Our main result focuses on finding a near-optimal approximation to $A$ from any finite-sized family of matrices, $\mathcal{F}$. Standard results from matrix sketching show that $O(\log|\mathcal{F}|)$ matvec queries suffice in this setting. This bound can also be achieved, and is optimal, for vector-matrix-vector queries of the form $x,y \rightarrow x^TAy$, which have been widely studied in work on rank-$1$ matrix sensing. Surprisingly, we show that, in the matvec model, it is possible to obtain a nearly quadratic improvement in complexity, to $\tilde{O}(\sqrt{\log|\mathcal{F}|})$. Further, we prove that this bound is tight up to log-log factors. Via covering number arguments, our result extends to well-studied infinite families. As an example, we establish that a near-optimal approximation from any *linear matrix family* of dimension $q$ can be learned with $\tilde{O}(\sqrt{q})$ matvec queries, improving on an $O(q)$ bound achievable via sketching techniques and vector-matrix-vector queries.
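To make the query model concrete, here is a minimal sketch (not the paper's $\tilde{O}(\sqrt{\log|\mathcal{F}|})$ algorithm) of the classical $O(\log|\mathcal{F}|)$ sketching baseline the abstract references: query the black-box matrix $A$ with a few random Gaussian vectors and pick the family member whose action on those vectors best matches. The matrix sizes, family construction, and query count here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Unknown target matrix, accessible only through matvec queries x -> A @ x.
A = rng.standard_normal((n, n))

def matvec(X):
    """Black-box matvec oracle: each column of X is one query x -> Ax."""
    return A @ X

# A small finite family F of candidate matrices: a few random
# perturbations of A, plus the exact matrix as the last member.
family = [A + 0.5 * rng.standard_normal((n, n)) for _ in range(7)]
family.append(A)

# Classical sketching baseline: O(log |F|) Gaussian matvec queries.
m = int(np.ceil(np.log2(len(family)))) + 2
G = rng.standard_normal((n, m))   # m random query vectors, as columns
AG = matvec(G)                    # m matvec queries to the oracle

# Select the family member whose sketch best matches A's sketch.
errors = [np.linalg.norm(F @ G - AG, "fro") for F in family]
best = family[int(np.argmin(errors))]

print("recovered exact family member:", np.allclose(best, A))
```

With the exact matrix present in the family, its sketch error is zero, so the baseline recovers it; for a general family, the same selection rule returns a near-optimal approximation with high probability. The paper's contribution is showing that roughly the square root of this number of matvec queries already suffices.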
Problem

Research questions and friction points this paper is trying to address.

Learning structured matrix approximations efficiently
Query complexity for general matrix families
Optimal matvec queries for near-optimal approximations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Efficient structured matrix approximation learning
Near-optimal approximation from finite matrix families
Near-quadratic improvement in matvec query complexity