Efficient Algorithms for Regularized Nonnegative Scale-invariant Low-rank Approximation Models

📅 2024-03-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the challenges of difficult regularization parameter tuning, slow convergence, and poor interpretability in sparse nonnegative matrix/tensor decomposition, this paper proposes a homogeneous scale-invariant low-rank approximation model. We rigorously prove for the first time that scale invariance induces an implicit ℓₚ norm regularization effect, thereby enhancing solution balance and interpretability. Building upon this insight, we develop a general Majorization-Minimization (MM) optimization framework that unifies support for non-Euclidean loss functions and explicit ℓₚ regularization, while providing theoretical convergence guarantees and intrinsic hyperparameter self-guidance. Extensive experiments on sparse NMF, ridge-regularized CP decomposition, and sparse Tucker decomposition demonstrate that our method significantly accelerates convergence (average speedup of 2.1×), improves model stability, and enhances both factor sparsity and semantic interpretability.

Technology Category

Application Category

📝 Abstract
Regularized nonnegative low-rank approximations, such as sparse Nonnegative Matrix Factorization or sparse Nonnegative Tucker Decomposition, form an important branch of dimensionality reduction models known for their enhanced interpretability. From a practical perspective, however, selecting appropriate regularizers and regularization coefficients, as well as designing efficient algorithms, remains challenging due to the multifactor nature of these models and the limited theoretical guidance available. This paper addresses these challenges by studying a more general model, the Homogeneous Regularized Scale-Invariant model. We prove that the scale-invariance inherent to low-rank approximation models induces an implicit regularization effect that balances solutions. This insight provides a deeper understanding of the role of regularization functions in low-rank approximation models, informs the selection of regularization hyperparameters, and enables the design of balancing strategies to accelerate the empirical convergence of optimization algorithms. Additionally, we propose a generic Majorization-Minimization (MM) algorithm capable of handling $ell_p^p$-regularized nonnegative low-rank approximations with non-Euclidean loss functions, with convergence guarantees. Our contributions are demonstrated on sparse Nonnegative Matrix Factorization, ridge-regularized Nonnegative Canonical Polyadic Decomposition, and sparse Nonnegative Tucker Decomposition.
Problem

Research questions and friction points this paper is trying to address.

Sparse Non-negative Matrix/Tucker Decomposition
Dimensionality Reduction
Algorithm Optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Homogeneous Regularization Scale Invariant Model
Majorization-Minimization Algorithm
Automatic Regularization Parameter Adjustment
🔎 Similar Papers