🤖 AI Summary
This paper addresses privacy-sensitive federated matrix completion, aiming to achieve efficient and secure missing-value prediction in multi-client distributed settings. To overcome the challenges of jointly optimizing nonconvex, nonsmooth multi-block objectives while ensuring communication efficiency and rigorous privacy protection, the authors propose FedMC-ADMM, a novel framework integrating linearized ADMM, randomized block coordinate updates, and alternating proximal gradient steps. According to the authors, FedMC-ADMM is the first method to establish an $O(\varepsilon^{-2})$ communication complexity bound for multi-block federated matrix completion. Extensive experiments on the MovieLens 1M/10M and Netflix datasets demonstrate that FedMC-ADMM achieves faster convergence and significantly higher test accuracy than state-of-the-art baselines, while preserving data privacy through local computation and secure aggregation.
📝 Abstract
Matrix completion is fundamental for predicting missing data, with a wide range of applications in personalized healthcare, e-commerce, recommendation systems, and social network analysis. Traditional matrix completion approaches typically assume centralized data storage, which raises challenges in terms of computational efficiency, scalability, and user privacy. In this paper, we address the problem of federated matrix completion, focusing on scenarios where user-specific data is distributed across multiple clients and privacy constraints are uncompromising. Federated learning provides a promising framework to address these challenges by enabling collaborative learning across distributed datasets without sharing raw data. We propose \texttt{FedMC-ADMM}, a novel algorithmic framework for solving federated matrix completion problems that combines the Alternating Direction Method of Multipliers with a randomized block-coordinate strategy and alternating proximal gradient steps. Unlike existing federated approaches, \texttt{FedMC-ADMM} effectively handles multi-block nonconvex and nonsmooth optimization problems, allowing efficient computation while preserving user privacy. We analyze the theoretical properties of our algorithm, demonstrating subsequential convergence and establishing a convergence rate of $\mathcal{O}(K^{-1/2})$, leading to a communication complexity of $\mathcal{O}(\epsilon^{-2})$ for reaching an $\epsilon$-stationary point. This work is the first to establish these theoretical guarantees for federated matrix completion in the presence of multi-block variables. To validate our approach, we conduct extensive experiments on real-world datasets, including MovieLens 1M, MovieLens 10M, and Netflix. The results demonstrate that \texttt{FedMC-ADMM} outperforms existing methods in terms of convergence speed and testing accuracy.
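To make the federated setup concrete, the sketch below illustrates the general pattern the abstract describes: each client holds only its own users' rows of the rating matrix, performs a local proximal-gradient step on its private user factors, and sends only gradient contributions for the shared item factors to the server, which aggregates them. This is a minimal, simplified illustration of federated matrix factorization under these assumptions, not the \texttt{FedMC-ADMM} algorithm itself (it omits the ADMM dual variables, linearization, and randomized block selection); all function names here are hypothetical.

```python
import numpy as np

def prox_l1(X, t):
    """Soft-thresholding: proximal operator of t * ||X||_1 (a simple nonsmooth regularizer)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def client_update(R, mask, U, V, lr=0.05, lam=0.01):
    """One local proximal-gradient step on a client's private user factors U.

    Minimizes 0.5 * ||mask * (U V^T - R)||_F^2 + lam * ||U||_1 over U,
    then returns the client's gradient contribution for the shared item factors V.
    Raw ratings R never leave the client; only grad_V is communicated.
    """
    err = mask * (U @ V.T - R)                       # residual on observed entries only
    U_new = prox_l1(U - lr * (err @ V), lr * lam)    # proximal gradient step on U
    err_new = mask * (U_new @ V.T - R)
    grad_V = err_new.T @ U_new                       # gradient w.r.t. V from this client
    return U_new, grad_V

def federated_mc(clients, n_items, rank=5, rounds=300, lr=0.05, seed=0):
    """Federated loop: clients update locally; server aggregates item-factor gradients.

    `clients` is a list of (R, mask) pairs, one per client, where R holds that
    client's observed ratings (zeros elsewhere) and mask flags observed entries.
    Summing gradients at the server stands in for secure aggregation.
    """
    rng = np.random.default_rng(seed)
    V = 0.1 * rng.standard_normal((n_items, rank))   # shared item factors
    Us = [0.1 * rng.standard_normal((R.shape[0], rank)) for R, _ in clients]
    for _ in range(rounds):
        grads = []
        for i, (R, mask) in enumerate(clients):
            Us[i], gV = client_update(R, mask, Us[i], V, lr=lr)
            grads.append(gV)                         # only gradients leave each client
        V -= lr * sum(grads)                         # server-side aggregated update
    return Us, V
```

Note the communication pattern: per round, each client uploads one `n_items x rank` gradient matrix and downloads the updated `V`, so communication cost is independent of the number of users a client holds, which is what makes this style of method attractive at scale.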