🤖 AI Summary
To address the slow convergence of the classical multiplicative update (MU) algorithm for nonnegative matrix factorization (NMF), this paper proposes the fast multiplicative update (fastMU) algorithm. Methodologically, fastMU views MU as an alternating majorization-minimization (MM) procedure and constructs tighter upper bounds on the Hessian for each subproblem, thereby accelerating optimization under both the Frobenius norm (quadratic loss) and the generalized β-divergence. Theoretical analysis guarantees monotonic objective descent and preservation of nonnegativity throughout the iterations. Empirically, fastMU is often several orders of magnitude faster than standard MU on both synthetic and real-world datasets, and remains competitive with state-of-the-art methods under the Frobenius loss. The work thus combines theoretical guarantees with practical efficiency in NMF optimization.
📝 Abstract
Nonnegative Matrix Factorization is an important tool in unsupervised machine learning for decomposing a data matrix into a product of parts that are often interpretable. Many algorithms have been proposed over the last three decades. A well-known method is the Multiplicative Updates algorithm proposed by Lee and Seung in 2002. Multiplicative updates have many interesting features: they are simple to implement, can be adapted to popular variants such as sparse Nonnegative Matrix Factorization, and, according to recent benchmarks, are state-of-the-art for many problems where the loss function is not the Frobenius norm. In this manuscript, we propose to improve the Multiplicative Updates algorithm, seen as an alternating majorization-minimization algorithm, by crafting a tighter upper bound of the Hessian matrix for each alternating subproblem. Convergence is still ensured, and we observe in practice, on both synthetic and real-world datasets, that the proposed fastMU algorithm is often several orders of magnitude faster than the regular Multiplicative Updates algorithm, and can even be competitive with state-of-the-art methods for the Frobenius loss.
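For context, the baseline that fastMU accelerates is the classical Lee-Seung multiplicative update rule for Frobenius-loss NMF. A minimal NumPy sketch of that baseline (not of fastMU itself, whose tighter Hessian majorizers are detailed in the paper) might look as follows; the function name, iteration count, and the small `eps` added for numerical safety are illustrative choices, not from the paper:

```python
import numpy as np

def mu_nmf(V, rank, n_iter=200, eps=1e-12, seed=0):
    """Classical multiplicative updates for min ||V - WH||_F^2 with W, H >= 0.

    Each update minimizes a majorizing surrogate of the subproblem, which
    yields monotone descent of the loss and preserves nonnegativity as long
    as the initialization is nonnegative.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Elementwise multiplicative update for H (W fixed)
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # Elementwise multiplicative update for W (H fixed)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because every update is a ratio of nonnegative quantities, no projection step is needed; the slow convergence of exactly this scheme is what motivates the tighter majorizers proposed in the paper.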