🤖 AI Summary
This work addresses the lack of uncertainty quantification for recovered matrix entries in robust principal component analysis (RPCA) by proposing CP-RPCA, a framework that, for the first time, brings conformal prediction to RPCA. CP-RPCA provides finite-sample coverage guarantees without requiring distributional assumptions and incorporates a weighted calibration mechanism to accommodate heterogeneous observation probabilities. The method supports both split and full conformal implementations and remains robust in the presence of missing data, outliers, and model misspecification. Experimental results demonstrate that CP-RPCA yields reliable and informative prediction intervals across diverse challenging scenarios while retaining high efficiency under correctly specified models, supporting its scalability and practical utility.
📝 Abstract
Robust principal component analysis (RPCA) is a widely used technique for recovering low-rank structure from matrices with missing entries and sparse, possibly large-magnitude corruptions. Although numerous algorithms achieve accurate point estimation, they offer little guidance on the uncertainty of recovered entries, limiting their reliability in practice. In this paper, we propose conformal prediction RPCA (CP-RPCA), a practical and distribution-free framework for uncertainty quantification in robust matrix recovery. The proposed method supports both split and full conformal implementations and incorporates weighted calibration to handle heterogeneous observation probabilities. We provide finite-sample coverage guarantees and demonstrate through extensive simulations that CP-RPCA delivers reliable uncertainty quantification under severe outliers, missing data, and model misspecification. Empirical results show that CP-RPCA produces informative intervals and remains competitive in efficiency when the RPCA model is well specified, making it a scalable and robust tool for uncertainty-aware matrix analysis.
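As a rough illustration of the weighted split-conformal calibration the abstract describes, the sketch below forms symmetric prediction intervals around RPCA point estimates from absolute residuals on a held-out calibration set. The function name, the choice of absolute residuals as the nonconformity score, and the use of inverse observation probabilities as weights are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def weighted_split_conformal_interval(point_est, cal_scores, cal_weights,
                                      test_weight, alpha=0.1):
    """Sketch of a weighted split-conformal interval for one matrix entry.

    point_est   : RPCA point estimate for the test entry (assumed given)
    cal_scores  : nonconformity scores on calibration entries,
                  e.g. |observed - recovered| (illustrative choice)
    cal_weights : weights for calibration entries, e.g. inverse
                  observation probabilities (illustrative choice)
    test_weight : weight for the test entry
    alpha       : 1 - target coverage level
    """
    # Normalize weights, including a placeholder weight for the test point,
    # as in weighted conformal prediction under covariate shift.
    w = np.append(cal_weights, test_weight)
    p = w / w.sum()

    # Smallest calibration score whose cumulative weight reaches 1 - alpha.
    order = np.argsort(cal_scores)
    sorted_scores = np.asarray(cal_scores)[order]
    cum = np.cumsum(p[:-1][order])
    idx = np.searchsorted(cum, 1 - alpha)
    q = sorted_scores[idx] if idx < len(sorted_scores) else np.inf

    return point_est - q, point_est + q
```

With uniform weights this reduces to the usual split-conformal quantile rule; heterogeneous weights tilt the quantile toward entries observed with probabilities similar to the test entry's.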