🤖 AI Summary
This work addresses a fundamental limitation of traditional Robust Principal Component Analysis (RPCA), which assumes an additive model between foreground and background and thus fails to capture realistic scenarios where the foreground occludes or replaces parts of the background, leading to model mismatch. To overcome this, the authors propose a Robust Principal Component Completion (RPCC) framework that explicitly models the sparse foreground as occluding the low-rank background. They formulate a fully probabilistic Bayesian sparse tensor factorization model and employ variational Bayesian inference to directly infer a hard classification of the support set of the sparse component, eliminating the need for post-hoc thresholding. Experiments demonstrate that RPCC achieves near-optimal estimation on synthetic data and significantly improves foreground-extraction and anomaly-detection performance on real-world color videos and hyperspectral datasets.
📝 Abstract
Robust principal component analysis (RPCA) seeks a low-rank component and a sparse component from their summation. Yet, in many applications of interest, the sparse foreground actually replaces, or occludes, elements of the low-rank background. To address this mismatch, a new framework is proposed in which the sparse component is identified indirectly by determining its support. This approach, called robust principal component completion (RPCC), is solved via variational Bayesian inference applied to a fully probabilistic Bayesian sparse tensor factorization. Convergence to a hard classifier for the support is shown, thereby eliminating the post-hoc thresholding required by most prior RPCA-based approaches. Experimental results show that the proposed approach delivers near-optimal estimates on synthetic data as well as robust foreground-extraction and anomaly-detection performance on real color video and hyperspectral datasets, respectively. Source code and appendices are available at https://github.com/WongYinJ/BCP-RPCC.
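To make the model mismatch concrete, the sketch below contrasts the additive RPCA observation model with the occlusion model that motivates RPCC. It is a minimal illustration, not the authors' implementation: the matrix sizes, rank, and 10% support density are arbitrary assumptions, and the variables `L`, `F`, and `support` are hypothetical stand-ins for the low-rank background, foreground values, and foreground support.

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank "background": a rank-2 matrix standing in for stacked video frames.
m, n, r = 20, 30, 2
L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Sparse foreground values and their support (the set RPCC aims to infer).
support = rng.random((m, n)) < 0.10   # ~10% of entries occluded
F = 5.0 * rng.standard_normal((m, n))

# Classical RPCA assumes foreground and background ADD:
Y_rpca = L + np.where(support, F, 0.0)

# The occlusion model: the foreground REPLACES background entries, so the
# background is observed only off the support -- a matrix-completion problem.
Y_rpcc = np.where(support, F, L)

# Off the support the two models agree; on it they differ by the hidden
# background values, which is exactly the mismatch RPCC is built to avoid.
assert np.allclose(Y_rpca[~support], Y_rpcc[~support])
assert np.allclose(Y_rpca[support] - Y_rpcc[support], L[support])
```

Under the occlusion model, estimating the background amounts to completing `L` from the entries outside the support, which is why identifying the support (rather than thresholding a dense residual) is the central inference target.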