🤖 AI Summary
Existing online robust principal component analysis (OR-PCA) methods rely on manually tuned, dataset-specific explicit regularization parameters, resulting in poor generalizability and scalability. Method: This paper proposes an implicit regularization framework for OR-PCA that eliminates explicit regularizers entirely. Its core innovation is the first systematic exploitation of the inherent implicit regularization effects—inducing low-rank and sparse structures—embedded in variants of modified gradient descent, including momentum, adaptive step sizes, and projection steps. Contribution/Results: A theoretical analysis combining online optimization and matrix decomposition establishes rigorous convergence guarantees. Experiments on synthetic and real-world streaming data demonstrate that the method matches or surpasses optimally tuned conventional OR-PCA without any hyperparameter tuning, significantly enhancing automation and scalability for large-scale online learning.
📝 Abstract
The performance of the standard Online Robust Principal Component Analysis (OR-PCA) technique depends on the optimal tuning of its explicit regularizers, and this tuning is dataset-sensitive. We aim to remove the dependency on these tuning parameters by using implicit regularization. We propose to use the implicit regularization effects of modified gradient descent variants to make OR-PCA tuning-free. Our method incorporates three different variants of modified gradient descent that separately but naturally encourage sparsity and low-rank structure in the data. The proposed method performs comparably to, or better than, tuned OR-PCA on both simulated and real-world datasets. Tuning-free OR-PCA is more scalable to large datasets, since it does not require dataset-dependent parameter tuning.
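The abstract does not spell out the update rules, so the following is only a hypothetical illustration of the general idea, not the paper's algorithm. It sketches a streaming decomposition x_t ≈ L r_t + s_t that relies on two well-known implicit-regularization effects instead of explicit nuclear-norm and ℓ1 penalties: small initialization of the factor L (biasing it toward low rank) and a Hadamard-product overparametrization of the sparse part with early stopping (which behaves like soft ℓ1 shrinkage). All names, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, NOT the paper's algorithm: streaming robust PCA
# x_t ~ L @ r_t + s_t with no explicit penalty terms anywhere.
rng = np.random.default_rng(0)
d, k, T = 20, 5, 300            # ambient dim, factor width, stream length
eta_in, eta_out = 0.02, 0.1     # inner (sparse) and outer (factor) steps

U_true = rng.normal(size=(d, 2))    # ground-truth rank-2 subspace
L = 0.01 * rng.normal(size=(d, k))  # small init -> implicit low-rank bias

for t in range(T):
    x = U_true @ rng.normal(size=2)
    if t % 10 == 0:
        x[rng.integers(d)] += 5.0   # occasional sparse outlier

    # Sparse part s = a*a - b*b: with tiny init and early stopping, only
    # coordinates with large residuals grow into s (implicit l1 effect).
    a = np.full(d, 0.01)
    b = np.full(d, 0.01)
    s = a * a - b * b
    for _ in range(50):             # early-stopped inner gradient steps
        r = np.linalg.lstsq(L, x - s, rcond=None)[0]
        res = x - L @ r - s         # reconstruction residual
        a += eta_in * 2 * a * res   # grad step on 0.5*||res||^2 w.r.t. a
        b -= eta_in * 2 * b * res   # ... and w.r.t. b
        s = a * a - b * b

    # Plain gradient step on the factor, with a normalized (adaptive)
    # step size for stability -- still no nuclear-norm term.
    r = np.linalg.lstsq(L, x - s, rcond=None)[0]
    res = x - L @ r - s
    L += eta_out / (1.0 + r @ r) * np.outer(res, r)
```

In this sketch the sparsity of s comes purely from the optimization dynamics: residual coordinates near zero barely move a and b away from 0.01 within 50 steps, while outlier coordinates grow multiplicatively and get absorbed, mimicking a soft threshold without an ℓ1 weight to tune.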