A fast Multiplicative Updates algorithm for Non-negative Matrix Factorization

📅 2023-03-31
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
To address the slow convergence of the classical multiplicative update (MU) algorithm for nonnegative matrix factorization (NMF), this paper proposes the fast multiplicative update (fastMU) algorithm. Methodologically, fastMU treats MU as an alternating majorization-minimization (MM) procedure and constructs tighter upper bounds on the Hessian for each subproblem, thereby accelerating optimization under both the Frobenius norm (quadratic loss) and the generalized β-divergence. Theoretical analysis guarantees monotonic objective descent and nonnegativity preservation throughout the iterations. Empirically, fastMU is often several orders of magnitude faster than standard MU on both synthetic and real-world datasets, while remaining competitive with state-of-the-art methods under the Frobenius loss. The work thus combines theoretical guarantees with practical efficiency in NMF optimization.
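For context, the classical MU baseline that fastMU accelerates can be sketched in a few lines of numpy. This is a minimal illustration of the standard Lee-Seung updates for the Frobenius loss, not the paper's fastMU variant; the small `eps` guard against division by zero is a common implementation convention, not something specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 20, 15, 4
# Nonnegative low-rank data matrix
X = rng.random((m, r)) @ rng.random((r, n))

W = rng.random((m, r))
H = rng.random((r, n))
eps = 1e-12  # guard against division by zero

def frobenius_loss(X, W, H):
    return 0.5 * np.linalg.norm(X - W @ H) ** 2

losses = [frobenius_loss(X, W, H)]
for _ in range(100):
    # Classical multiplicative updates: elementwise multiply by a
    # ratio of nonnegative terms, so W and H stay nonnegative.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    losses.append(frobenius_loss(X, W, H))
```

Because each factor is scaled by a ratio of nonnegative quantities, nonnegativity is preserved automatically, and the objective is nonincreasing at every iteration; this monotone but often slow decrease is exactly what fastMU's tighter majorizers are designed to speed up.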
📝 Abstract
Nonnegative Matrix Factorization is an important tool in unsupervised machine learning to decompose a data matrix into a product of parts that are often interpretable. Many algorithms have been proposed during the last three decades. A well-known method is the Multiplicative Updates algorithm proposed by Lee and Seung in 2002. Multiplicative updates have many interesting features: they are simple to implement, can be adapted to popular variants such as sparse Nonnegative Matrix Factorization, and, according to recent benchmarks, are state-of-the-art for many problems where the loss function is not the Frobenius norm. In this manuscript, we propose to improve the Multiplicative Updates algorithm, seen as an alternating majorization-minimization algorithm, by crafting a tighter upper bound of the Hessian matrix for each alternate subproblem. Convergence is still ensured, and we observe in practice, on both synthetic and real-world datasets, that the proposed fastMU algorithm is often several orders of magnitude faster than the regular Multiplicative Updates algorithm, and can even be competitive with state-of-the-art methods for the Frobenius loss.
Problem

Research questions and friction points this paper is trying to address.

Addresses the slow convergence of classical multiplicative updates (MU) for NMF under quadratic and β-divergence losses
Proposes the fastMU algorithm, which accelerates convergence via a tighter local majorization of each alternate subproblem
Demonstrates that fastMU is often orders of magnitude faster than standard MU and competitive with state-of-the-art NMF algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Alternating majorization-minimization interpretation of multiplicative updates
Tighter upper bounds on the Hessian of each alternate subproblem
fastMU algorithm with guaranteed monotone descent and preserved nonnegativity
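The majorization-minimization view underlying these contributions can be checked numerically: for the Frobenius loss, one classical MU step on H is exactly the minimizer of a separable quadratic majorizer whose diagonal Hessian bound is `(W.T @ W @ H) / H`. The sketch below verifies this equivalence for the classical MU step only; the paper's fastMU replaces this diagonal bound with a tighter one, whose exact form is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 10, 8, 3
X = rng.random((m, n))
W = rng.random((m, r))
H = rng.random((r, n))

# Gradient of 0.5 * ||X - W H||_F^2 with respect to H
grad = W.T @ (W @ H - X)

# Diagonal Hessian upper bound used by classical MU (elementwise)
D = (W.T @ W @ H) / H

# Minimizing the separable quadratic majorizer
#   g(H') = f(H) + <grad, H' - H> + 0.5 * sum(D * (H' - H)**2)
# gives the closed-form step H - grad / D ...
H_mm = H - grad / D

# ... which coincides with the multiplicative update
H_mu = H * (W.T @ X) / (W.T @ W @ H)

assert np.allclose(H_mm, H_mu)
```

Seen this way, the looser the diagonal bound D, the smaller the effective step; fastMU's tighter Hessian bounds allow larger steps per subproblem while keeping the monotone-descent guarantee of the MM framework.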
Mai-Quyen Pham
IMT Atlantique; UMR CNRS 6285 Lab-STICC
Jérémy E. Cohen
Univ Lyon, INSA-Lyon, UCBL, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69100 Villeurbanne, France
Thierry Chonavel
IMT Atlantique; UMR CNRS 6285 Lab-STICC