🤖 AI Summary
Efficiently recovering large-scale low-tubal-rank tensors from a small number of noisy linear measurements remains challenging, as existing t-SVD-based methods suffer from high computational complexity and poor scalability. Method: This paper introduces, for the first time, a Burer–Monteiro-type bi-factorization framework for low-tubal-rank tensor recovery. The proposed Factorized Gradient Descent (FGD) algorithm operates without prior knowledge of the true tubal rank and is robust to rank overestimation. By leveraging t-product algebra, the nonconvex optimization model avoids explicit t-SVD computation. Contribution/Results: The authors establish theoretical convergence guarantees under both noise-free and noisy conditions. Experiments on multiple benchmark tasks demonstrate that FGD achieves faster convergence, lower reconstruction error, and significantly reduced computational and storage overhead compared with state-of-the-art tensor recovery methods.
📝 Abstract
This paper considers the problem of recovering a tensor with an underlying low-tubal-rank structure from a small number of corrupted linear measurements. Traditional approaches to this problem require computing the tensor Singular Value Decomposition (t-SVD), a computationally intensive process that renders them impractical for large-scale tensors. To address this challenge, we propose an efficient and effective low-tubal-rank tensor recovery method based on a factorization procedure akin to the Burer–Monteiro (BM) method. Specifically, we decompose the large tensor into two smaller factor tensors and solve the resulting problem via factorized gradient descent (FGD). This strategy eliminates the need for t-SVD computation, thereby reducing both computational cost and storage requirements. We provide a rigorous theoretical analysis guaranteeing the convergence of FGD in both noise-free and noisy settings. Notably, our method does not require a precise estimate of the tensor tubal rank: even when the tubal rank is slightly overestimated, it continues to perform robustly. A series of experiments demonstrates that, compared with other popular methods, our approach achieves superior performance across multiple scenarios, in terms of both faster computation and smaller convergence error.
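To make the idea concrete, here is a minimal, hypothetical sketch of the factorized approach the abstract describes: the target tensor is represented as the t-product of two small factor tensors, and both factors are updated by plain gradient descent on the squared residual. The t-product is computed via an FFT along the third mode followed by facewise matrix multiplication. All function names, dimensions, and step-size choices below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tprod(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x n4 x n3):
    FFT along the third mode, facewise matmul, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.fft.ifft(Cf, axis=2).real

def ttrans(A):
    """Tensor transpose: transpose each frontal slice and
    reverse the order of slices 2..n3."""
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

def fgd(Y, r, step=0.005, iters=4000, seed=0):
    """Illustrative factorized gradient descent on
    f(L, R) = 0.5 * ||L * R - Y||_F^2  (t-product '*'),
    with small random initialization (no t-SVD needed)."""
    n1, n2, n3 = Y.shape
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((n1, r, n3))
    R = 0.1 * rng.standard_normal((r, n2, n3))
    for _ in range(iters):
        E = tprod(L, R) - Y                 # residual tensor
        L = L - step * tprod(E, ttrans(R))  # grad wrt L
        R = R - step * tprod(ttrans(L), E)  # grad wrt R
    return L, R
```

Note that `r` only needs to upper-bound the true tubal rank: with a small random initialization, the extra factor directions stay small, which is consistent with the robustness to rank overestimation claimed in the abstract. In a full recovery setting the residual would be formed through the measurement operator rather than against `Y` directly.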