🤖 AI Summary
Low-rank tensor estimation is crucial in high-dimensional signal processing, machine learning, and imaging science. However, the conventional tensor SVD is computationally prohibitive at scale, while existing factorization methods are highly sensitive to rank estimation: when the rank is overestimated, gradient descent often converges slowly or even diverges. To address this, we propose Alternating Preconditioned Gradient Descent (APGD), an efficient tensor recovery algorithm that alternately updates two factor tensors in the over-parameterized setting, incorporating a preconditioning term to accelerate optimization. Theoretically, APGD achieves linear convergence at a rate independent of the tensor condition number and does not require precise knowledge of the rank. By combining approximate tensor SVD initialization with a geometric analysis, the method significantly enhances robustness and scalability. Extensive experiments on synthetic data demonstrate that APGD attains both computational efficiency and strong robustness against rank overestimation.
📝 Abstract
The problem of low-tubal-rank tensor estimation is a fundamental task with wide applications across high-dimensional signal processing, machine learning, and imaging science. Traditional approaches tackle this problem by performing a tensor singular value decomposition, which is computationally expensive and becomes infeasible for large-scale tensors. Recent approaches address this issue by factorizing the tensor into two smaller factor tensors and solving the resulting problem via gradient descent. However, this kind of approach requires an accurate estimate of the tensor rank: when the rank is overestimated, the convergence of gradient descent and its variants slows down significantly, or the iterates may even diverge. To address this problem, we propose an Alternating Preconditioned Gradient Descent (APGD) algorithm, which accelerates convergence in the over-parameterized setting by incorporating a preconditioning term into the gradient update and updating the two factor tensors alternately. Based on certain geometric assumptions on the objective function, we establish linear convergence guarantees for general low-tubal-rank tensor estimation problems. We then further analyze the specific cases of low-tubal-rank tensor factorization and low-tubal-rank tensor recovery. Our theoretical results show that APGD achieves linear convergence even under over-parameterization, with a convergence rate independent of the tensor condition number. Extensive simulations on synthetic data validate our theoretical assertions.
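To make the alternating preconditioned update concrete, here is a minimal sketch of the idea in the simpler matrix-factorization analogue (the paper works with tensors under the t-product and with a general measurement operator, both of which are omitted here; the step size `eta`, damping `eps`, and the specific dimensions are illustrative assumptions, not the paper's settings). Each factor's gradient step is right-multiplied by the inverse Gram matrix of the other factor, and the two factors are updated alternately with an overestimated rank:

```python
import numpy as np

# Assumed toy setup: recover a rank-3 matrix M with an overestimated rank of 6.
rng = np.random.default_rng(0)
n, true_r, over_r = 30, 3, 6
M = rng.standard_normal((n, true_r)) @ rng.standard_normal((true_r, n))

# Small random initialization of the two over-parameterized factors.
L = 0.1 * rng.standard_normal((n, over_r))
R = 0.1 * rng.standard_normal((n, over_r))
eta, eps = 0.5, 1e-8  # step size; small damping keeps the inverses well-posed

for _ in range(300):
    # Update L with R fixed: gradient of 0.5 * ||L R^T - M||_F^2 in L,
    # preconditioned by (R^T R + eps I)^{-1}.
    grad_L = (L @ R.T - M) @ R
    L = L - eta * grad_L @ np.linalg.inv(R.T @ R + eps * np.eye(over_r))
    # Then update R with the new L (the "alternating" part), preconditioned
    # analogously by (L^T L + eps I)^{-1}.
    grad_R = (L @ R.T - M).T @ L
    R = R - eta * grad_R @ np.linalg.inv(L.T @ L + eps * np.eye(over_r))

rel_err = np.linalg.norm(L @ R.T - M) / np.linalg.norm(M)
```

The preconditioner rescales the search direction by the local curvature of the other factor, which is what removes the dependence on the condition number; with `eta = 1` the preconditioned step coincides with an exact alternating least-squares update in this fully observed toy case.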