🤖 AI Summary
This paper studies Masked Robust Principal Component Analysis (Masked RPCA): given an observation matrix $Y = L_0 + M S_0$, where $L_0$ is low-rank, $S_0$ is sparse, and $M$ is a known left-multiplying mask matrix, the goal is to jointly recover $L_0$ and $S_0$. Unlike standard RPCA, which assumes unstructured sparsity, this work addresses the realistic setting where sparse corruptions undergo linear masking. The authors propose the first mask-aware convex optimization formulation and introduce a novel “mask–sparsity incoherence” condition, replacing the classical sparsity–low-rank incoherence assumption. They derive sufficient conditions for exact recovery and establish stability guarantees via a restricted $\ell_\infty$-norm analysis. Experiments demonstrate strong robustness and high-accuracy recovery across diverse mask structures, including subsampling, convolutional masks, and random projections.
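The observation model above can be sketched with synthetic data. The following is a minimal illustration, not the paper's code: all sizes, the sparsity level, and the choice of a row-subsampling mask are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p, r = 50, 40, 60, 3   # illustrative sizes: Y is n x m, S0 is p x m, rank(L0) = r

# Low-rank component L0 = A @ B.T (rank r by construction)
A = rng.standard_normal((n, r))
B = rng.standard_normal((m, r))
L0 = A @ B.T

# Sparse component S0 with roughly 5% nonzero entries
S0 = np.zeros((p, m))
support = rng.random((p, m)) < 0.05
S0[support] = rng.standard_normal(int(support.sum()))

# Left-multiplying mask M: here a row-subsampling operator,
# one of the mask structures mentioned in the experiments
rows = rng.choice(p, size=n, replace=False)
M = np.zeros((n, p))
M[np.arange(n), rows] = 1.0

# Observation: low-rank part plus masked sparse corruption
Y = L0 + M @ S0
```

The recovery task is to invert this forward model: given only `Y` and `M`, estimate `L0` and `S0`.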
📝 Abstract
Given a known matrix that is the sum of a low-rank matrix and a masked sparse matrix, we wish to recover both the low-rank component and the sparse component. The sparse matrix is masked in the sense that a linear transformation has been applied on its left. We propose a convex optimization problem to recover the low-rank and sparse matrices, which generalizes the robust PCA framework. We provide incoherence conditions for the success of the proposed convex optimization problem, adapted to the masked setting. The “mask” matrix can be quite general as long as a so-called restricted infinity norm condition is satisfied. Further analysis of the incoherence condition is provided, and we conclude with promising numerical experiments.
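As a sketch of the convex program described above: the natural masked generalization of the robust PCA objective, under the notation $Y = L_0 + M S_0$, would take the form below. The exact formulation and the trade-off parameter $\lambda$ are assumptions of this sketch, not taken from the paper.

$$
\min_{L,\,S}\ \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + M S = Y,
$$

where $\|\cdot\|_*$ is the nuclear norm (promoting low rank in $L$) and $\|\cdot\|_1$ is the entrywise $\ell_1$ norm (promoting sparsity in $S$). Setting $M = I$ recovers the standard RPCA program.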