Leave-One-Out Analysis for Nonconvex Robust Matrix Completion with General Thresholding Functions

📅 2024-07-28
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses robust matrix completion (RMC) under sparse noise corruption. To overcome limitations of existing nonconvex approaches, which rely on explicit regularization or sample splitting, the authors propose a nonconvex alternating optimization algorithm that requires neither. The method alternates between projected-gradient updates for the low-rank estimate and generalized thresholding (e.g., soft-thresholding, SCAD) for sparse noise removal. The paper introduces the first leave-one-out analysis for nonconvex RMC algorithms, establishing linear convergence guarantees and improving the sampling-complexity bound over singular-value-projection-based methods. The theoretical framework accommodates a broad class of thresholding functions, and extensive experiments validate fast convergence and accurate recovery. The approach thus combines theoretical rigor (provable linear convergence and refined sampling bounds) with practical efficiency (simple, scalable iterations with no regularization parameters to tune).

📝 Abstract
We study the problem of robust matrix completion (RMC), where the partially observed entries of an underlying low-rank matrix are corrupted by sparse noise. Existing analyses of nonconvex methods for this problem either require explicit but empirically redundant regularization in the algorithm or require sample splitting in the analysis. In this paper, we consider a simple yet efficient nonconvex method which alternates between a projected gradient step for the low-rank part and a thresholding step for the sparse noise part. Inspired by the leave-one-out analysis for low-rank matrix completion, we establish that the method achieves linear convergence for a general class of thresholding functions, including, for example, soft-thresholding and SCAD. To the best of our knowledge, this is the first leave-one-out analysis of a nonconvex method for RMC. Additionally, when applied to low-rank matrix completion, our result improves the sampling complexity of the existing result for the singular value projection method.
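The alternation described in the abstract (a projected gradient step on the low-rank part, a thresholding step on the sparse part) can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the step size `eta`, the `1/p` gradient rescaling, the fixed threshold `tau`, and the function names are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(X, tau):
    """Entrywise soft-thresholding: shrink magnitudes toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rank_r_projection(X, r):
    """Project onto the set of rank-r matrices via a truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

def robust_matrix_completion(Y, mask, r, tau, eta=1.0, n_iters=200):
    """Alternating sketch for RMC: Y holds the observed entries
    (mask == 1), assumed corrupted by sparse noise. L is the low-rank
    estimate, S the sparse-noise estimate."""
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    p = mask.mean()  # estimate of the observation probability
    for _ in range(n_iters):
        # Thresholding step for the sparse part on the observed entries.
        S = soft_threshold(mask * (Y - L), tau)
        # Projected gradient step for the low-rank part.
        grad = mask * (L + S - Y) / p
        L = rank_r_projection(L - eta * grad, r)
    return L, S
```

The `soft_threshold` call could be swapped for any thresholding rule in the general class the paper analyzes (e.g., SCAD); only the sparse-part update changes.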
Problem

Research questions and friction points this paper is trying to address.

Robust matrix completion with sparse noise corruption
Nonconvex method analysis without sample splitting
Linear convergence for general thresholding functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alternates gradient and thresholding steps
Uses general class of thresholding functions
First leave-one-out analysis for nonconvex RMC
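The "general class of thresholding functions" includes soft-thresholding and SCAD. A sketch of both entrywise rules is below; the SCAD pieces follow the standard Fan-Li definition with the customary `a = 3.7`, and the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def soft(x, lam):
    """Soft-thresholding: shrinks every entry toward zero by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def scad(x, lam, a=3.7):
    """SCAD thresholding (Fan & Li): soft-shrinks small entries,
    interpolates in a transition region, and leaves large entries
    untouched (unbiased); requires a > 2."""
    ax = np.abs(x)
    out = soft(x, lam)                      # region |x| <= 2*lam
    mid = (ax > 2 * lam) & (ax <= a * lam)  # transition region
    out = np.where(mid, ((a - 1) * x - np.sign(x) * a * lam) / (a - 2), out)
    return np.where(ax > a * lam, x, out)   # large entries kept as-is
```

The practical difference: soft-thresholding biases every surviving entry by `lam`, while SCAD removes that bias on large entries, which is why analyses covering both are stated for a general thresholding class rather than a single rule.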
Tianming Wang
School of Mathematics, Southwestern University of Finance and Economics, Chengdu, Sichuan, China
Ke Wei
Fudan University
high dimensional signal processing and data analysis, reinforcement learning, nonconvex optimization