🤖 AI Summary
This work addresses the problem of completing high-order low-rank tensors from partial observations, a task arising in applications such as quantum state tomography and hyperspectral imaging. It proposes Preconditioned Riemannian Gradient Descent (PRGD), a preconditioned Riemannian optimization algorithm tailored to the tensor train (TT) format. Theoretically, PRGD is shown to converge linearly, addressing the slow convergence and high per-iteration cost of conventional TT completion methods. By combining Riemannian optimization on the fixed TT-rank manifold with an adaptive preconditioner, PRGD reduces computation time on synthetic data by about two orders of magnitude relative to classical algorithms. In practical applications (hyperspectral image completion and quantum state tomography) it substantially reduces iteration counts and overall runtime. The result is a structured tensor completion method that pairs rigorous theoretical guarantees with strong empirical performance.
📝 Abstract
Low-rank tensor completion aims to recover a tensor from partially observed entries and is widely applicable in fields such as quantum computing and image processing. Owing to the significant advantages of the tensor train (TT) format in handling structured high-order tensors, this paper investigates the low-rank tensor completion problem in the TT format. We propose a preconditioned Riemannian gradient descent algorithm (PRGD) for low TT-rank tensor completion and establish its linear convergence. Experimental results on both simulated and real datasets demonstrate the effectiveness of PRGD. On simulated data, PRGD reduces computation time by two orders of magnitude compared with existing classical algorithms. In practical applications such as hyperspectral image completion and quantum state tomography, PRGD significantly reduces the number of iterations, and thereby the overall computation time.
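The paper's algorithm is not reproduced on this page, but the core recipe it describes (a gradient step on the observed entries, rescaled to compensate for the sampling rate, followed by a retraction back to the low-rank set) can be illustrated in the simplest order-2 case, where a TT tensor is just a low-rank matrix. The sampling rate `p`, the `1/p` rescaling used as a stand-in preconditioner, and the truncated-SVD retraction below are illustrative assumptions for this sketch, not the paper's actual construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-2 matrix: the order-2 special case of a low TT-rank tensor.
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Observe a random subset of entries with sampling rate p.
p = 0.5
mask = rng.random((n, n)) < p

def svd_truncate(X, r):
    """Retraction onto the set of rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n, n))
for _ in range(200):
    # Gradient of 0.5 * ||P_Omega(X - M)||^2, rescaled by 1/p so that its
    # expectation matches the full (unobserved) gradient.
    G = (mask * (X - M)) / p
    X = svd_truncate(X - G, r)

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

With enough observed entries, the rescaled step makes each iteration behave like a full gradient step in expectation, which is the intuition behind sampling-aware preconditioning; the actual PRGD algorithm applies this idea on the TT manifold rather than via repeated SVDs.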