🤖 AI Summary
This work addresses a core difficulty in large-scale sparse tensor completion (TC): conventional low-rank decomposition methods struggle to be computationally efficient while preserving the algebraic structure their fast solvers rely on. We propose a structured solver framework based on approximate Richardson iteration. Its core innovation is a *lifting* mechanism that reformulates unstructured TC regression as a structured linear system—amenable to arbitrary black-box low-rank tensor decomposition solvers (e.g., CP, Tucker, or tensor-train)—while yielding sublinear-time methods with a provable convergence rate. The method integrates alternating least squares with structured regression, requiring no modifications to underlying tensor decomposition implementations. Experiments on real-world datasets demonstrate that our approach accelerates TC by up to 100× over direct CP-based methods while maintaining comparable accuracy. To our knowledge, it is the first method enabling fast, high-accuracy completion of ultra-large-scale sparse tensors.
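To make the lifting idea concrete, here is a minimal imputation-style sketch in NumPy: the unobserved entries of the right-hand side are filled with the current model's predictions, so an ordinary (unmasked, structured) least-squares solver can be reused as a black box. The function name `lifted_masked_lstsq` and the exact update rule are illustrative assumptions, not necessarily the paper's scheme.

```python
import numpy as np

def lifted_masked_lstsq(A, b, mask, num_iters=100):
    """Approximately solve min_x ||mask * (A @ x - b)|| using only a
    *full* (unmasked) least-squares solver as a black box.
    Each iteration "lifts" the masked problem by imputing unobserved
    entries with the current model's predictions, restoring the
    structure the fast solver needs. (Imputation-style sketch; the
    paper's exact lifting may differ.)"""
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        b_lifted = np.where(mask, b, A @ x)               # observed data + imputed rest
        x, *_ = np.linalg.lstsq(A, b_lifted, rcond=None)  # black-box structured solve
    return x

# Toy demo: recover x_true from a 60%-observed right-hand side.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
mask = rng.random(200) < 0.6   # boolean mask of observed entries
x = lifted_masked_lstsq(A, b, mask)
```

The fixed point of this iteration solves the masked problem, and the error contracts geometrically whenever the observed rows of `A` still have full column rank.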
📝 Abstract
We study tensor completion (TC) through the lens of low-rank tensor decomposition (TD). Many TD algorithms use fast alternating minimization methods that solve highly structured linear regression problems at each step (e.g., for CP, Tucker, and tensor-train decompositions). However, such algebraic structure is lost in TC regression problems, making direct extensions unclear. To address this, we propose a lifting approach that approximately solves TC regression problems using structured TD regression algorithms as black-box subroutines, enabling sublinear-time methods. We theoretically analyze the convergence rate of our algorithm, which is based on approximate Richardson iteration, and we demonstrate on real-world tensors that its running time can be 100x faster than direct methods for CP completion.
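The approximate Richardson iteration underlying the abstract's algorithm can be illustrated with a small numerical sketch. Here `solve_M` stands in for the black-box structured solver that is applied to the residual at each step; the diagonal surrogate and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def approximate_richardson(apply_A, solve_M, b, num_iters=50):
    """Solve A x = b via preconditioned Richardson iteration:
        x_{k+1} = x_k + M^{-1} (b - A x_k),
    where solve_M is a (possibly approximate) black-box solver for a
    structured surrogate M of A. Convergence is geometric whenever the
    iteration matrix I - M^{-1} A is a contraction."""
    x = np.zeros_like(b)
    for _ in range(num_iters):
        x = x + solve_M(b - apply_A(x))   # correct x using the structured solve
    return x

# Toy demo: A is diagonally dominant; its diagonal plays the role of
# the fast structured surrogate M.
rng = np.random.default_rng(0)
n = 50
A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = approximate_richardson(lambda v: A @ v, lambda r: r / np.diag(A), b)
```

The point of the black-box formulation is that `solve_M` never needs to know it is being used inside a completion problem, which mirrors how the paper reuses structured TD regression solvers unchanged.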