🤖 AI Summary
This work addresses the failure of conventional matrix completion methods in ultra-sparse sampling regimes, where the expected number of observations per row, denoted \( C \), falls below the rank of the matrix. Since accurate imputation of the matrix \( M \) itself is impossible in this regime, the authors instead estimate the averaged second-moment matrix \( T = M^{\top} M / n \) and its row space. Their method constructs a low-variance, unbiased estimate of \( T \) by normalizing each nonzero entry of the empirical second moment by its observed frequency, then fits a rank-\( r \) factor model to the result via gradient descent. Theoretically, every local minimum of the gradient-descent objective is shown to be approximately global, with sample complexity scaling linearly in the ambient dimension \( d \). Empirically, the algorithm reduces bias by 88% on MovieLens datasets and lowers recovery errors for \( T \) and \( M \) by 59% and 38%, respectively, on Amazon review data at a sparsity level of \( 10^{-7} \), demonstrating row-space estimation in the extreme setting where \( C \) is below the rank.
📝 Abstract
Matrix completion is a classical problem that has received recurring interest across a wide range of fields. In this paper, we revisit this problem in an ultra-sparse sampling regime, where each entry of an unknown $n \times d$ matrix $M$ (with $n \ge d$) is observed independently with probability $p = C / d$, for a fixed integer $C \ge 2$. This setting is motivated by applications involving large, sparse panel datasets, where the number of rows far exceeds the number of columns. When each row contains only $C$ observed entries -- fewer than the rank of $M$ -- accurate imputation of $M$ is impossible. Instead, we estimate the row span of $M$ or the averaged second-moment matrix $T = M^{\top} M / n$. The empirical second-moment matrix computed from observed entries exhibits non-random and sparse missingness. We propose an unbiased estimator that normalizes each nonzero entry of the second moment by its observed frequency, followed by gradient descent to impute the missing entries of $T$. The normalization divides a weighted sum of $n$ binomial random variables by the total number of ones. We show that the estimator is unbiased for any $p$ and enjoys low variance. When the row vectors of $M$ are drawn uniformly from a rank-$r$ factor model satisfying an incoherence condition, we prove that if $n \ge O(d r^5 \epsilon^{-2} C^{-2} \log d)$, any local minimum of the gradient-descent objective is approximately global and recovers $T$ with error at most $\epsilon^2$. Experiments on both synthetic and real-world data validate our approach. On three MovieLens datasets, our algorithm reduces bias by $88\%$ relative to baseline estimators. We also empirically validate on synthetic data that the required sample size $n$ scales linearly with $d$. On an Amazon reviews dataset with sparsity $10^{-7}$, our method reduces the recovery error of $T$ by $59\%$ and of $M$ by $38\%$ compared to baseline methods.
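The two-step procedure the abstract describes -- a frequency-normalized estimate of $T$ followed by rank-$r$ gradient descent -- can be sketched in NumPy as below. All names, dimensions, step sizes, and iteration counts are illustrative assumptions, not the paper's actual implementation; the factor model, the incoherence condition, and the theoretical guarantees are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rank-r factor model M = A @ B.T (A, B, and all sizes are assumptions).
n, d, r, C = 20000, 30, 3, 2           # note C < r: fewer observations per row than the rank
A = rng.normal(size=(n, r))
B = rng.normal(size=(d, r)) / np.sqrt(r)
M = A @ B.T

# Ultra-sparse sampling: each entry observed independently with probability p = C / d.
mask = rng.random((n, d)) < C / d
X = np.where(mask, M, 0.0)

# Frequency-normalized estimator of T = M^T M / n: divide each pairwise sum
# by the number of rows in which BOTH columns were observed (its frequency),
# rather than by n, so each observed entry of T_hat is unbiased.
S = X.T @ X                             # sums over co-observed rows
F = mask.astype(float)
cnt = F.T @ F                           # co-observation counts per column pair
obs = cnt > 0
T_hat = np.zeros((d, d))
T_hat[obs] = S[obs] / cnt[obs]

# Gradient descent on a rank-r factorization U @ U.T, fitting only the
# observed entries of T_hat and thereby imputing the missing ones.
U = 0.1 * rng.normal(size=(d, r))
step = 0.2 / d
for _ in range(2000):
    R = np.where(obs, U @ U.T - T_hat, 0.0)   # residual on observed entries
    U -= step * (R + R.T) @ U

T_true = M.T @ M / n
err = np.linalg.norm(U @ U.T - T_true) / np.linalg.norm(T_true)
```

Even though each row carries only about $C = 2$ observations -- too few to impute any individual row -- the pairwise counts aggregate across all $n$ rows, so `T_hat` and its rank-$r$ fit can recover $T$ and hence the row space; `err` measures the relative Frobenius error of the recovered $T$.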