Learnable Scaled Gradient Descent for Guaranteed Robust Tensor PCA

📅 2025-01-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and slow, complex optimization of the tensor nuclear norm (TNN) in robust tensor principal component analysis (RTPCA), this paper proposes RTPCA-SGD, a learnable scaled gradient descent method. It is the first to introduce scaled gradient descent into low-rank plus sparse tensor decomposition, circumventing expensive TNN computation within the t-SVD framework while rigorously establishing linear convergence at a constant rate independent of the condition number. The authors further design a self-supervised deep-unfolding model that enables end-to-end learning of the algorithm's parameters. Experiments on synthetic and real-world datasets demonstrate that RTPCA-SGD achieves higher recovery accuracy than RTPCA-TNN while significantly reducing runtime, unifying theoretical guarantees with practical efficiency.
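
To make the core idea concrete, here is a minimal NumPy sketch of what a ScaledGD-style iteration under the t-SVD model could look like. It is assembled from the summary above, not the paper's reference implementation: the names (rtpca_sgd, hard_threshold), the quantile-based thresholding rule, the step size eta, and the spectral initialization are all illustrative assumptions.

```python
import numpy as np

def hard_threshold(X, alpha):
    # Keep only the alpha-fraction of largest-magnitude entries; a simple
    # stand-in (assumed here) for the paper's sparsification operator.
    thresh = np.quantile(np.abs(X), 1.0 - alpha)
    return np.where(np.abs(X) > thresh, X, 0.0)

def rtpca_sgd(Y, rank, alpha=0.1, eta=0.5, iters=100):
    # Decompose Y ~ L + S with L of tubal rank `rank`, factoring L = U *t V^H
    # and running scaled gradient descent on the factors (U, V).
    # t-products are computed slice-wise in the Fourier domain along mode 3.
    n1, n2, n3 = Y.shape
    S = hard_threshold(Y, alpha)
    Lf = np.fft.fft(Y - S, axis=2)
    Uf = np.zeros((n1, rank, n3), dtype=complex)
    Vf = np.zeros((n2, rank, n3), dtype=complex)
    # Spectral initialization: truncate each frontal slice's SVD to `rank`
    # and split the singular values evenly between the two factors.
    for k in range(n3):
        u, s, vh = np.linalg.svd(Lf[:, :, k], full_matrices=False)
        Uf[:, :, k] = u[:, :rank] * np.sqrt(s[:rank])
        Vf[:, :, k] = vh[:rank].conj().T * np.sqrt(s[:rank])
    for _ in range(iters):
        Lf = np.einsum('irk,jrk->ijk', Uf, Vf.conj())  # L = U *t V^H, slice-wise
        L = np.real(np.fft.ifft(Lf, axis=2))
        S = hard_threshold(Y - L, alpha)               # sparse-component update
        Rf = np.fft.fft(L + S - Y, axis=2)             # residual, Fourier domain
        for k in range(n3):
            U, V = Uf[:, :, k].copy(), Vf[:, :, k].copy()
            R = Rf[:, :, k]
            # Preconditioning by the inverse Gram matrices is what removes the
            # condition-number dependence from the per-iteration contraction.
            Uf[:, :, k] = U - eta * R @ V @ np.linalg.inv(V.conj().T @ V)
            Vf[:, :, k] = V - eta * R.conj().T @ U @ np.linalg.inv(U.conj().T @ U)
    return L, S
```

The key departure from plain gradient descent is the pair of inverse Gram factors scaling each update; without them, the step size would have to shrink with the condition number of the low-rank component.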

📝 Abstract
Robust tensor principal component analysis (RTPCA) aims to separate the low-rank and sparse components of multi-dimensional data, making it an essential technique in signal processing and computer vision. The recently emerging tensor singular value decomposition (t-SVD) has gained considerable attention for its ability to capture the low-rank structure of tensors better than traditional matrix SVD. However, existing methods often rely on the computationally expensive tensor nuclear norm (TNN), which limits their scalability to real-world tensors. To address this issue, we explore an efficient scaled gradient descent (SGD) approach within the t-SVD framework for the first time and propose the RTPCA-SGD method. Theoretically, we rigorously establish recovery guarantees for RTPCA-SGD under mild assumptions, demonstrating that with appropriate parameter selection it converges linearly to the true low-rank tensor at a constant rate, independent of the condition number. To enhance its practical applicability, we further propose a learnable self-supervised deep unfolding model, which enables effective parameter learning. Numerical experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed methods at competitive computational cost, notably consuming less time than RTPCA-TNN.
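
The recovery claim can be probed on synthetic data along the lines of the experiments described above. The snippet below, with illustrative sizes and corruption level and reusing the rtpca_sgd sketch from the summary section, attempts to recover a random tubal-rank-3 tensor from 5% gross corruptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, n3 = 50, 3, 20
# Random tubal-rank-r ground truth L0 = U0 *t V0^T, built in the Fourier domain.
Uf = np.fft.fft(rng.standard_normal((n, r, n3)) / np.sqrt(n3), axis=2)
Vf = np.fft.fft(rng.standard_normal((n, r, n3)) / np.sqrt(n3), axis=2)
L0 = np.real(np.fft.ifft(np.einsum('irk,jrk->ijk', Uf, Vf.conj()), axis=2))
# Corrupt 5% of the entries with gross outliers.
mask = rng.random(L0.shape) < 0.05
S0 = np.where(mask, 10.0 * rng.standard_normal(L0.shape), 0.0)

L_hat, S_hat = rtpca_sgd(L0 + S0, rank=r, alpha=0.1)  # sketch defined above
print('relative recovery error:', np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))
```
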
Problem

Research questions and friction points this paper is trying to address.

Multidimensional Data Analysis
t-SVD Technique
Big Data Computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

RTPCA-SGD
t-SVD
SGD
Lanlan Feng
University of Electronic Science and Technology of China
tensor decomposition, tensor principal component analysis, tensor completion, machine learning
Ce Zhu
FIEEE, University of Electronic Science and Technology of China
Visual Information Processing, Visual Coding & Communications, Machine Learning with Applications
Yipeng Liu
School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, 611731, China
S. Ravishankar
Department of Computational Mathematics, Science and Engineering and the Department of Biomedical Engineering, Michigan State University (MSU), East Lansing, MI 48824, USA
Longxiu Huang
Michigan State University
Numerical Linear Algebra, Applied Harmonic Analysis, Data Analysis, Approximation Theory