Efficient Low-Tubal-Rank Tensor Estimation via Alternating Preconditioned Gradient Descent

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Low-rank tensor estimation is crucial in high-dimensional signal processing, machine learning, and imaging science. However, conventional tensor SVD is computationally prohibitive, while existing factorization methods are highly sensitive to rank estimation—overestimation often leads to slow convergence or divergence of gradient descent. To address this, we propose Alternating Preconditioned Gradient Descent (APGD), an efficient tensor recovery algorithm that alternately updates two factor tensors under over-parameterized settings, incorporating a preconditioning term to accelerate optimization. Theoretically, APGD achieves linear convergence independent of the tensor condition number and does not require precise rank knowledge. By integrating approximate tensor SVD initialization and geometric analysis, the method significantly enhances robustness and scalability. Extensive experiments on synthetic data demonstrate that APGD attains both computational efficiency and strong robustness against rank overestimation.

📝 Abstract
The problem of low-tubal-rank tensor estimation is a fundamental task with wide applications across high-dimensional signal processing, machine learning, and imaging science. Traditional approaches tackle this problem by performing tensor singular value decomposition, which is computationally expensive and becomes infeasible for large-scale tensors. Recent approaches address this issue by factorizing the tensor into two smaller factor tensors and solving the resulting problem with gradient descent. However, such approaches require an accurate estimate of the tensor rank; when the rank is overestimated, gradient descent and its variants slow down significantly or even diverge. To address this problem, we propose an Alternating Preconditioned Gradient Descent (APGD) algorithm, which accelerates convergence in the over-parameterized setting by adding a preconditioning term to the original gradient and updating the two factors alternately. Based on certain geometric assumptions on the objective function, we establish linear convergence guarantees for general low-tubal-rank tensor estimation problems, and we further analyze the specific cases of low-tubal-rank tensor factorization and low-tubal-rank tensor recovery. Our theoretical results show that APGD achieves linear convergence even under over-parameterization, with a convergence rate independent of the tensor condition number. Extensive simulations on synthetic data validate our theoretical assertions.
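The alternating preconditioned update described above can be sketched in a few lines of NumPy. This is an illustrative reimplementation for the factorization case, not the authors' code: the t-product is handled slice-wise in the Fourier domain along the third axis (where it reduces to ordinary matrix products), each factor's gradient is right-multiplied by the inverse Gram matrix of the other factor (the preconditioning term), and the factors are updated alternately. The damping constant `eps` is an added assumption to keep the preconditioner invertible when the rank `r` is overestimated; the paper's analysis covers recovery from measurements as well, which this sketch omits.

```python
import numpy as np

def apgd_tubal_factorization(M, r, eta=0.5, eps=1e-8, iters=100):
    """Sketch of alternating preconditioned gradient descent for the
    low-tubal-rank factorization M ≈ L * R^H (t-product), computed
    slice-wise in the Fourier domain along the third axis."""
    n1, n2, n3 = M.shape
    rng = np.random.default_rng(0)
    Mf = np.fft.fft(M, axis=2)
    # random complex initialization of the two factor tensors
    Lf = rng.standard_normal((n1, r, n3)) + 0j
    Rf = rng.standard_normal((n2, r, n3)) + 0j
    for _ in range(iters):
        for k in range(n3):  # each frontal slice is an independent matrix problem
            Lk, Rk, Mk = Lf[:, :, k], Rf[:, :, k], Mf[:, :, k]
            # residual of the current factorization on this slice
            E = Lk @ Rk.conj().T - Mk
            # preconditioned step for L: gradient E R right-multiplied by
            # (R^H R + eps*I)^{-1}; eps guards against rank overestimation
            P = np.linalg.inv(Rk.conj().T @ Rk + eps * np.eye(r))
            Lk = Lk - eta * E @ Rk @ P
            # alternate: recompute the residual with the updated L, then step R
            E = Lk @ Rk.conj().T - Mk
            Q = np.linalg.inv(Lk.conj().T @ Lk + eps * np.eye(r))
            Rk = Rk - eta * E.conj().T @ Lk @ Q
            Lf[:, :, k], Rf[:, :, k] = Lk, Rk
    # assemble L * R^H slice-wise and transform back to the original domain
    approx = np.fft.ifft(np.einsum('irk,jrk->ijk', Lf, Rf.conj()), axis=2)
    return approx.real
```

Alternating the updates means each factor's step uses the other factor's freshest value, and the Gram-inverse preconditioner is what makes the step size (and hence the convergence rate) insensitive to the conditioning of the underlying tensor.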
Problem

Research questions and friction points this paper is trying to address.

Tensor SVD is computationally prohibitive for large-scale tensors
Gradient descent on factorized formulations slows down or diverges when the tensor rank is overestimated
Existing convergence guarantees degrade with the tensor condition number
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alternating Preconditioned Gradient Descent for tensor estimation
Preconditioning accelerates convergence in over-parameterized settings
Linear convergence independent of tensor condition number
Zhiyu Liu
State Key Laboratory of Robotics and Intelligent Systems, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, P.R. China, and also with the University of Chinese Academy of Sciences, Beijing 100049, China
Zhi Han
Shenyang Institute of Automation, Chinese Academy of Sciences
Computer Vision
Yandong Tang
Professor, Shenyang Institute of Automation, Chinese Academy of Sciences
Computer vision, image processing, pattern recognition
Jun Fan
Department of Mathematics, Hong Kong Baptist University, Hong Kong, 999077
Yao Wang
Center for Intelligent Decision-making and Machine Learning, School of Management, Xi’an Jiaotong University, Xi’an 710049, P.R. China