🤖 AI Summary
Low-rank regularization (LRR) optimization is hindered by the non-convexity and discontinuity of the rank function, as well as the non-differentiability and high computational cost of singular value decomposition (SVD). This paper proposes an efficient, fully differentiable, SVD-free generalized low-rank regularization framework that unifies the nuclear norm, the Schatten-*p* norm, and various non-convex relaxations. The method leverages a matrix power series expansion coupled with random projection to yield a differentiable rank estimator, backed by theoretical convergence guarantees: both the bias and the variance decay rapidly with sample size and iteration count. Implemented via GPU-friendly tensor operations, it integrates seamlessly with arbitrary gradient-based optimizers. Empirical evaluation on matrix completion, multi-task learning, and neural network compression demonstrates a 3–8× speedup over SVD-based approaches while matching or surpassing their accuracy.
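To make the "power series plus random projection" idea concrete, here is a minimal NumPy sketch of an SVD-free nuclear-norm estimator. It is an illustrative assumption, not the paper's implementation: the function name, the Frobenius-norm spectral bound, and the probe count are all choices made here for clarity, and the paper's version would use autograd-compatible GPU tensors rather than NumPy arrays.

```python
import numpy as np

def nuclear_norm_estimate(X, num_probes=64, num_terms=100, seed=0):
    """SVD-free estimate of ||X||_* = tr((X^T X)^{1/2}) (illustrative sketch).

    Combines the binomial series (I - B)^{1/2} = sum_k c_k B^k, where
    B = I - X^T X / c with c >= lambda_max(X^T X), and Hutchinson's
    randomized trace estimator tr(A) ~ mean_j v_j^T A v_j over Rademacher
    probe vectors v_j. Every step is a matrix product, so the same
    computation is differentiable when written with autograd tensors.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    c = np.linalg.norm(X, "fro") ** 2  # crude upper bound on lambda_max(X^T X)
    V = rng.choice([-1.0, 1.0], size=(n, num_probes))  # Rademacher probes
    W = V.copy()                        # W holds B^k V, without ever forming B
    coeff, acc = 1.0, np.sum(V * V)     # k = 0 term, with c_0 = 1
    for k in range(1, num_terms):
        W = W - X.T @ (X @ W) / c       # W <- B W, using only mat-vec products
        coeff *= (2 * k - 3) / (2 * k)  # c_k = c_{k-1} (2k - 3) / (2k)
        acc += coeff * np.sum(V * W)
    return np.sqrt(c) * acc / num_probes
```

For a diagonal `X` the Rademacher probes make the trace estimate exact, so the only error is series truncation; in general the bias shrinks as `num_terms` grows and the variance as `num_probes` grows, mirroring the convergence behavior claimed above.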
📄 Abstract
Low-rank regularization (LRR) has been widely applied in machine learning, but the associated optimization is challenging: directly optimizing the rank function under constraints is NP-hard in general. To overcome this difficulty, various relaxations of the rank function have been studied. However, optimizing these relaxed LRRs typically depends on singular value decomposition (SVD), a time-consuming and non-differentiable operation that cannot be handled by gradient-based techniques. To address these challenges, we propose an efficient differentiable approximation of the generalized LRR. The considered LRR form subsumes many popular choices, including the nuclear norm, the Schatten-$p$ norm, and various non-convex relaxations. Our method enables LRR terms to be appended to loss functions in a plug-and-play fashion, and its GPU-friendly operations allow efficient and convenient implementation. Furthermore, we present a convergence analysis which rigorously shows that both the bias and the variance of our rank estimator decrease rapidly with increased sample size and iteration steps. In the experimental study, the proposed method is applied to a variety of tasks, demonstrating its versatility and efficiency. Code is available at https://github.com/naiqili/EDLRR.
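As a hedged illustration of how such a framework can subsume the Schatten-$p$ family, note that the regularizer $\sum_i \sigma_i^p = \mathrm{tr}((X^\top X)^{p/2})$ needs only different series coefficients: $(1-x)^{p/2} = \sum_k (-1)^k \binom{p/2}{k} x^k$. The NumPy sketch below is written under the same illustrative assumptions as above (Frobenius scaling bound, Rademacher trace probes, hypothetical function name) and is not the authors' implementation; $p=1$ recovers the nuclear norm and $p=2$ the squared Frobenius norm, while $0<p<1$ gives non-convex relaxations.

```python
import numpy as np

def schatten_p_estimate(X, p=1.0, num_probes=64, num_terms=100, seed=0):
    """SVD-free sketch of the Schatten-p regularizer sum_i sigma_i^p.

    Writes tr((X^T X)^{p/2}) = c^{p/2} tr((I - B)^{p/2}) with
    B = I - X^T X / c, expands (1 - x)^{p/2} in its binomial series, and
    estimates the trace with Rademacher probes, so the whole computation
    is mat-vec products and remains autograd-friendly.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    c = np.linalg.norm(X, "fro") ** 2  # upper bound on lambda_max(X^T X)
    V = rng.choice([-1.0, 1.0], size=(n, num_probes))  # Rademacher probes
    W = V.copy()                     # W holds B^k V without forming B
    coeff, acc = 1.0, np.sum(V * V)  # k = 0 term, c_0 = 1
    for k in range(1, num_terms):
        W = W - X.T @ (X @ W) / c    # W <- B W
        coeff *= (k - 1 - p / 2) / k  # c_k = (-1)^k binom(p/2, k)
        acc += coeff * np.sum(V * W)
    return c ** (p / 2) * acc / num_probes
```

A plug-and-play use would then look like `loss = fit_loss + lam * schatten_p_estimate(W, p=0.5)`, with the penalty differentiated through by the optimizer like any other loss term (hypothetical variable names).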