Tensor Completion with Nearly Linear Samples Given Weak Side Information

📅 2020-07-01
🏛️ Proceedings of the ACM on Measurement and Analysis of Computing Systems
📈 Citations: 3
Influential: 0
🤖 AI Summary
Tensor completion suffers from a computational-statistical gap: the best known polynomial-time algorithms require $O(n^{t/2})$ samples, whereas the information-theoretic requirement is only $O(n)$. This work narrows the gap by demonstrating, for the first time, that near-linear sample complexity $O(n^{1+\kappa})$ is achievable using only weak side information per mode that is non-orthogonal to the latent factors, without requiring subspace observations. The authors propose a weighted higher-order SVD framework that integrates mode-specific side information, coupled with an iterative projection algorithm and a robust initialization scheme, and establish theoretical guarantees of uniform convergence. Experiments on recommendation and neuroimaging datasets show that, even when the sample size is reduced by over 90%, the method achieves 35–52% lower RMSE than state-of-the-art approaches.
📝 Abstract
Tensor completion exhibits an interesting computational-statistical gap in terms of the number of samples needed to perform tensor estimation. While there are only Θ(tn) degrees of freedom in a t-order tensor with n^t entries, the best known polynomial-time algorithm requires O(n^{t/2}) samples in order to guarantee consistent estimation. In this paper, we show that weak side information is sufficient to reduce the sample complexity to O(n). The side information consists of a weight vector for each of the modes which is not orthogonal to any of the latent factors along that mode; this is significantly weaker than assuming noisy knowledge of the subspaces. We provide an algorithm that utilizes this side information to produce a consistent estimator with O(n^{1+κ}) samples for any small constant κ > 0. We also provide experiments on both synthetic and real-world datasets that validate our theoretical insights.
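The role of the weight vectors can be illustrated with a toy sketch. This is not the paper's algorithm, only a minimal rank-1 demonstration of the underlying idea: contracting inverse-propensity-weighted observations against side-information vectors for two modes yields, in expectation, the latent factor of the remaining mode up to scale, so a sparse fraction of entries suffices. All parameter values (`n`, `p`) and the choice of all-ones weight vectors are hypothetical; the all-ones vector is non-orthogonal here only because the toy factors are chosen positive.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 0.2  # toy dimension and sampling probability (hypothetical values)

# Rank-1 ground-truth tensor T = u ⊗ v ⊗ w with positive latent factors.
u = rng.uniform(0.5, 1.5, size=n)
v = rng.uniform(0.5, 1.5, size=n)
w = rng.uniform(0.5, 1.5, size=n)
T = np.einsum('i,j,k->ijk', u, v, w)

# Observe each of the n^3 entries independently with probability p,
# i.e. roughly p * n^3 samples in expectation.
mask = rng.random((n, n, n)) < p

# Weak side information: one weight vector per mode that is merely
# non-orthogonal to the latent factor along that mode. The all-ones
# vector qualifies here because the factors are positive.
w2 = np.ones(n)
w3 = np.ones(n)

# Contract the inverse-propensity-weighted observations against w2, w3.
# In expectation this equals u * (v @ w2) * (w @ w3): the mode-1 factor
# up to an unknown but nonzero scale.
u_hat = np.einsum('ijk,j,k->i', np.where(mask, T, 0.0) / p, w2, w3)

corr = np.corrcoef(u_hat, u)[0, 1]
print(f"correlation of estimate with true mode-1 factor: {corr:.3f}")
```

With only ~20% of entries observed, the contracted estimate is strongly correlated with the true factor, which is the intuition behind reducing the sample requirement toward the linear regime.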
Problem

Research questions and friction points this paper is trying to address.

Reducing tensor completion samples with weak side information
Overcoming computational-statistical gap in tensor estimation
Utilizing non-orthogonal weight vectors for consistent estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses weak side information for tensor completion
Reduces sample complexity to nearly linear
Provides consistent estimator with minimal samples