A Linearized Alternating Direction Multiplier Method for Federated Matrix Completion Problems

📅 2025-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses privacy-sensitive federated matrix completion, aiming to achieve efficient and secure missing-value prediction in multi-client distributed settings. To overcome the challenges of jointly optimizing nonconvex, nonsmooth multi-block objectives while ensuring communication efficiency and rigorous privacy protection, we propose FedMC-ADMM, a novel framework integrating linearized ADMM, randomized block coordinate updates, and alternating proximal gradient steps. To the best of our knowledge, FedMC-ADMM is the first method to establish an $\mathcal{O}(\varepsilon^{-2})$ communication complexity bound for multi-block federated matrix completion. Extensive experiments on MovieLens 1M/10M and Netflix datasets demonstrate that FedMC-ADMM achieves faster convergence and significantly higher test accuracy compared to state-of-the-art baselines, while preserving data privacy through local computation and secure aggregation.

📝 Abstract
Matrix completion is fundamental for predicting missing data, with a wide range of applications in personalized healthcare, e-commerce, recommendation systems, and social network analysis. Traditional matrix completion approaches typically assume centralized data storage, which raises challenges in terms of computational efficiency, scalability, and user privacy. In this paper, we address the problem of federated matrix completion, focusing on scenarios where user-specific data is distributed across multiple clients and privacy constraints are uncompromising. Federated learning provides a promising framework to address these challenges by enabling collaborative learning across distributed datasets without sharing raw data. We propose \texttt{FedMC-ADMM}, a novel algorithmic framework for solving federated matrix completion problems that combines the Alternating Direction Method of Multipliers with a randomized block-coordinate strategy and alternating proximal gradient steps. Unlike existing federated approaches, \texttt{FedMC-ADMM} effectively handles multi-block nonconvex and nonsmooth optimization problems, allowing efficient computation while preserving user privacy. We analyze the theoretical properties of our algorithm, demonstrating subsequential convergence and establishing a convergence rate of $\mathcal{O}(K^{-1/2})$, leading to a communication complexity of $\mathcal{O}(\epsilon^{-2})$ for reaching an $\epsilon$-stationary point. This work is the first to establish these theoretical guarantees for federated matrix completion in the presence of multi-block variables. To validate our approach, we conduct extensive experiments on real-world datasets, including MovieLens 1M, 10M, and Netflix. The results demonstrate that \texttt{FedMC-ADMM} outperforms existing methods in terms of convergence speed and testing accuracy.
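For orientation, the multi-block problem described in the abstract typically takes the following form; this is a standard federated matrix-completion formulation sketched from the description above (local user factors $U_k$ per client, a shared item factor $V$, nonsmooth regularizers $r_k$ and $g$, and $P_{\Omega_k}$ projecting onto client $k$'s observed entries of its rating block $R_k$), not necessarily the paper's exact model:

```latex
\min_{\{U_k\},\, V}\;
  \sum_{k=1}^{N} \frac{1}{2}
    \bigl\| P_{\Omega_k}\!\bigl(U_k V^{\top} - R_k\bigr) \bigr\|_F^2
  + \sum_{k=1}^{N} r_k(U_k) + g(V)
```

An ADMM treatment introduces local copies $V_k$ with consensus constraints $V_k = V$, so each client's update touches only its own data $R_k$ while the server enforces agreement on the item factor.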
Problem

Research questions and friction points this paper is trying to address.

Federated matrix completion with distributed user-specific data
Privacy-preserving collaborative learning without sharing raw data
Efficient computation for multi-block nonconvex and nonsmooth optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines ADMM with randomized block-coordinate strategy
Handles multi-block nonconvex nonsmooth optimization problems
Ensures user privacy with efficient computation
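The pattern listed above (local proximal gradient steps on per-client blocks, a server-maintained consensus item factor, and ADMM-style dual updates) can be illustrated with a toy simulation. This is a minimal sketch of consensus ADMM with linearized and proximal updates for low-rank matrix completion, written from the summary's description; the function names, step sizes, and the plain averaging step (standing in for secure aggregation) are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def prox_grad_step(X, grad, step, lam):
    """Proximal gradient step using the prox of the ridge term (lam/2)||X||_F^2."""
    return (X - step * grad) / (1.0 + step * lam)

def fedmc_admm_round(R_blocks, masks, U_list, V_list, V_global, duals,
                     rho=1.0, step=0.01, lam=0.1):
    """One illustrative communication round. Client k holds a row-block R_k of
    the rating matrix, a local user factor U_k, a local copy V_k of the shared
    item factor, and a dual variable for the consensus constraint V_k = V."""
    for k, (R, M) in enumerate(zip(R_blocks, masks)):
        U, V, Y = U_list[k], V_list[k], duals[k]
        E = M * (U @ V.T - R)                    # residual on observed entries
        U_new = prox_grad_step(U, E @ V, step, lam)
        E = M * (U_new @ V.T - R)                # refresh residual after U step
        # linearized step on the local item-factor copy V_k
        V_list[k] = V - step * (E.T @ U_new + Y + rho * (V - V_global))
        U_list[k] = U_new
    # server: consensus update (secure aggregation in the federated setting)
    V_global = sum(V + Y / rho for V, Y in zip(V_list, duals)) / len(V_list)
    for k in range(len(duals)):                  # dual ascent on V_k = V
        duals[k] = duals[k] + rho * (V_list[k] - V_global)
    return U_list, V_list, V_global, duals
```

Note that each client touches only its own block `R_k`; only item-factor copies and dual variables are exchanged with the server, which is the mechanism that keeps raw ratings local.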
Patrick Hytla
University of Dayton Research Institute, University of Dayton, 300 College Park, Dayton, 45469, Ohio, USA
Tran T. A. Nghia
Department of Mathematics and Statistics, Oakland University, Rochester, MI 48309, USA
Duy Nhat Phan
University of Dayton Research Institute
Generative AI · Machine Learning · Numerical Optimization
Andrew Rice
University of Dayton Research Institute, University of Dayton, 300 College Park, Dayton, 45469, Ohio, USA