Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent

📅 2024-01-22
🏛️ IEEE Transactions on Signal Processing
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Efficiently recovering large-scale low-tubal-rank tensors from a small number of noisy linear measurements remains challenging, as existing t-SVD–based methods suffer from high computational complexity and poor scalability. Method: This paper is the first to bring a Burer–Monteiro–type bi-factorization framework to low-tubal-rank tensor recovery. The proposed Factorized Gradient Descent (FGD) algorithm operates without prior knowledge of the true tubal rank and is robust to rank overestimation. Leveraging t-product algebra, the nonconvex optimization model avoids explicit t-SVD computation. Contribution/Results: Theoretical convergence guarantees are established under noise. Experiments on multiple benchmark tasks demonstrate that FGD achieves faster convergence, lower reconstruction error, and significantly reduced computational and storage overhead compared to state-of-the-art tensor recovery methods.

📝 Abstract
This paper considers the problem of recovering a tensor with an underlying low-tubal-rank structure from a small number of corrupted linear measurements. Traditional approaches to this problem require computing the tensor Singular Value Decomposition (t-SVD), a computationally intensive process that renders them impractical for large-scale tensors. To address this challenge, we propose an efficient and effective low-tubal-rank tensor recovery method based on a factorization procedure akin to the Burer-Monteiro (BM) method. Specifically, we decompose the large tensor into two smaller factor tensors and then solve the problem via factorized gradient descent (FGD). This strategy eliminates the need for t-SVD computation, reducing both computational cost and storage requirements. We provide rigorous theoretical analysis ensuring the convergence of FGD in both noise-free and noisy settings. Notably, our method does not require a precise estimate of the tensor tubal-rank: even when the tubal-rank is slightly overestimated, our approach continues to perform robustly. A series of experiments demonstrates that, compared with other popular methods, our approach achieves superior performance in multiple scenarios, in terms of faster computational speed and smaller convergence error.
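The factorization idea described in the abstract can be sketched in NumPy for the simplest, fully observed (denoising) case. The paper's actual measurement operators, initialization, and step-size choices are not given here, so every name and constant below (`t_product`, `eta`, the toy sizes, the overestimated rank `r_hat`) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x r x n3) and B (r x n2 x n3):
    FFT along the third mode, slice-wise matrix products, inverse FFT."""
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ijk,jlk->ilk', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def t_transpose(A):
    """Tensor transpose: transpose each frontal slice and reverse slices 2..n3."""
    At = np.transpose(A, (1, 0, 2))
    return np.concatenate([At[:, :, :1], At[:, :, :0:-1]], axis=2)

# Toy instance (assumed sizes): observe Y = X* + noise, X* of tubal-rank r.
rng = np.random.default_rng(0)
n1, n2, n3, r = 20, 20, 5, 3
X_star = t_product(rng.standard_normal((n1, r, n3)),
                   rng.standard_normal((r, n2, n3)))
Y = X_star + 0.01 * rng.standard_normal(X_star.shape)

# FGD on f(L, R) = 0.5 * ||L * R - Y||_F^2 with a slightly
# overestimated rank, illustrating robustness to rank overestimation.
r_hat = r + 1
L = 0.1 * rng.standard_normal((n1, r_hat, n3))
R = 0.1 * rng.standard_normal((r_hat, n2, n3))
eta = 1e-3  # step size: an assumption, not taken from the paper

for _ in range(3000):
    E = t_product(L, R) - Y  # residual in the tensor domain
    # Simultaneous gradient steps on both factors (no t-SVD anywhere).
    L, R = (L - eta * t_product(E, t_transpose(R)),
            R - eta * t_product(t_transpose(L), E))

rel_err = np.linalg.norm(t_product(L, R) - X_star) / np.linalg.norm(X_star)
```

The per-iteration work is FFTs plus small slice-wise matrix products, which is the source of the cost savings over t-SVD-based methods; only the two factor tensors are stored, never a full-size singular value decomposition.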
Problem

Research questions and friction points this paper is trying to address.

Tensor Recovery
t-SVD Efficiency
Big Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Burer-Monteiro method
Factorized Gradient Descent
Tensor recovery
Zhiyu Liu
State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, P.R. China, and also with the University of Chinese Academy of Sciences, Beijing 100049, China
Zhi-Long Han
State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, P.R. China
Yandong Tang
Professor, Shenyang Institute of Automation, Chinese Academy of Sciences
Computer vision, image processing, pattern recognition
Xi-Le Zhao
University of Electronic Science and Technology of China
sparse and low-rank modeling for high-dimensional data analysis
Yao Wang
Center for Intelligent Decision-making and Machine Learning, School of Management, Xi’an Jiaotong University, Xi’an 710049, P.R. China