🤖 AI Summary
This work addresses the challenge that nonnegative sparse matrices often lack intrinsic low-rank structure, rendering conventional low-rank models ineffective. To overcome this, we propose a nonlinear matrix factorization framework that leverages the low-rank structure hidden beneath the ReLU activation function. The framework jointly enforces low-rankness, sparsity, and nonnegativity. We design an Accelerated Alternating Partial Bregman (AAPB) algorithm capable of simultaneously updating multiple variables and, crucially, establish for the first time sublinear and global convergence guarantees without assuming globally Lipschitz continuous gradients. Theoretically, our analysis integrates Bregman proximal gradient methods, adaptive kernel-induced distances, and nonconvex nonsmooth optimization. Algorithmically, closed-form updates are derived to enhance computational efficiency. Experiments demonstrate that our method significantly outperforms state-of-the-art approaches on graph-regularized clustering and sparse NMF basis compression, achieving both superior model expressiveness and high computational efficiency.
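As a quick illustration (my own sketch, not code from the paper), the premise is that a non-negative sparse matrix can be exactly the ReLU of a low-rank matrix while itself being essentially full-rank; the latent factor, not the data matrix, carries the low-rank structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build X = max(0, Theta) where Theta = W @ H has rank r.
# X is non-negative and sparse, yet generally far from rank r.
m, n, r = 50, 40, 5
Theta = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X = np.maximum(Theta, 0)

print(np.linalg.matrix_rank(Theta))  # exactly r
print(np.linalg.matrix_rank(X))      # typically much larger than r
print(np.mean(X == 0))               # roughly half the entries are zero
```

This is why a conventional low-rank model fitted directly to `X` struggles, while a model of the form `X ≈ max(0, W @ H)` can recover the rank-`r` structure.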
📝 Abstract
Despite the remarkable success of low-rank estimation in data mining, its effectiveness diminishes when applied to data that inherently lacks low-rank structure. To address this limitation, in this paper we focus on non-negative sparse matrices and investigate the intrinsic low-rank characteristics of the rectified linear unit (ReLU) activation function. We first propose a novel nonlinear matrix decomposition framework incorporating a comprehensive regularization term designed to simultaneously promote structures useful in clustering and compression tasks, such as low-rankness, sparsity, and non-negativity of the resulting factors. This formulation presents significant computational challenges due to its multi-block structure, non-convexity, non-smoothness, and the absence of global gradient Lipschitz continuity. To address these challenges, we develop an accelerated alternating partial Bregman proximal gradient method (AAPB), whose distinctive feature lies in its capability to update multiple variables simultaneously. Under mild and theoretically justified assumptions, we establish both sublinear and global convergence properties of the proposed algorithm. Through careful selection of kernel generating distances tailored to the various regularization terms, we derive the corresponding closed-form solutions while ensuring that the $L$-smooth adaptable property holds for any $L \ge 1$. Numerical experiments on graph-regularized clustering and sparse NMF basis compression confirm the effectiveness of our model and algorithm.
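To make the decomposition concrete, here is a minimal alternating scheme for fitting `X ≈ max(0, W @ H)`. This is an illustrative latent-variable heuristic under my own assumptions, not the paper's regularized model or its AAPB algorithm (no Bregman distances, acceleration, or sparsity/non-negativity regularizers):

```python
import numpy as np

def relu_decomp_naive(X, r, iters=300, seed=0):
    """Fit X ≈ max(0, W @ H) with W @ H of rank r.

    Illustrative sketch only: alternate between a latent matrix Z that
    agrees with X on its support (and is non-positive elsewhere) and a
    rank-r truncated-SVD approximation of Z.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.standard_normal((m, r))
    H = rng.standard_normal((r, n))
    mask = X > 0
    for _ in range(iters):
        Theta = W @ H
        # Latent matrix: match X where X > 0, stay non-positive elsewhere.
        Z = np.where(mask, X, np.minimum(Theta, 0.0))
        # Best rank-r approximation of Z via truncated SVD.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        W = U[:, :r] * s[:r]
        H = Vt[:r]
    return W, H
```

The update of `W` and `H` from the truncated SVD is the kind of closed-form subproblem solution the abstract alludes to; the paper's actual updates additionally handle the regularization terms through suitably chosen kernel generating distances.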