FedSPD: A Soft-clustering Approach for Personalized Decentralized Federated Learning

📅 2024-10-24
🏛️ Conference on Uncertainty in Artificial Intelligence
📈 Citations: 1
Influential: 0
🤖 AI Summary
In decentralized federated learning, heterogeneous client data distributions lead to insufficient personalization and high communication overhead. Method: The paper proposes FedSPD, a soft-clustering-based personalized federated learning framework for the decentralized setting. It establishes consensus on models for distinct data clusters via distributed soft clustering, letting clients in a serverless peer-to-peer network personalize to their own mixtures of these clusters, while supporting selective model updates and providing convergence guarantees even under low connectivity. Contribution/Results: Compared with existing decentralized personalized approaches, FedSPD achieves significant accuracy gains on real-world datasets, especially in sparse-connectivity scenarios, while reducing communication costs by 30%–50%. The framework thus jointly improves personalization performance and communication efficiency.

📝 Abstract
Federated learning has recently gained popularity as a framework for distributed clients to collaboratively train a machine learning model using local data. While traditional federated learning relies on a central server for model aggregation, recent advancements adopt a decentralized framework, enabling direct model exchange between clients and eliminating the single point of failure. However, existing decentralized frameworks often assume all clients train a shared model. Personalizing each client's model can enhance performance, especially with heterogeneous client data distributions. We propose FedSPD, an efficient personalized federated learning algorithm for the decentralized setting, and show that it learns accurate models even in low-connectivity networks. To provide theoretical guarantees on convergence, we introduce a clustering-based framework that enables consensus on models for distinct data clusters while personalizing to unique mixtures of these clusters at different clients. This flexibility, allowing selective model updates based on data distribution, substantially reduces communication costs compared to prior work on personalized federated learning in decentralized settings. Experimental results on real-world datasets show that FedSPD outperforms multiple decentralized variants of personalized federated learning algorithms, especially in scenarios with low-connectivity networks.
Problem

Research questions and friction points this paper is trying to address.

Personalizing models for clients with heterogeneous data distributions
Reducing communication costs in decentralized federated learning
Ensuring model accuracy in low-connectivity network environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Soft-clustering personalized decentralized federated learning
Consensus on models for distinct data clusters
Selective model updates reducing communication costs
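The abstract describes the core idea: clients reach consensus on a model per data cluster, then each client personalizes by mixing those cluster models according to its own data distribution. A minimal sketch of that mixing step is below. FedSPD's exact update and aggregation rules are not given in this summary, so the loss-based softmax weighting, the linear cluster models, and all variable names here are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Illustrative sketch only: each client mixes K shared cluster models
# according to how well each one fits its local data.

rng = np.random.default_rng(0)
K, d = 3, 5  # number of clusters, feature dimension (hypothetical)

# Cluster models (one weight vector per cluster), assumed to have reached
# consensus across the peer-to-peer network.
cluster_models = rng.normal(size=(K, d))

# A client's local data; here it is generated mostly from cluster 1.
X = rng.normal(size=(20, d))
y = X @ cluster_models[1] + 0.01 * rng.normal(size=20)

def soft_assignments(X, y, models, temperature=1.0):
    """Softmax over negative per-cluster losses: lower loss -> higher weight."""
    losses = np.array([np.mean((X @ w - y) ** 2) for w in models])
    logits = -losses / temperature
    logits -= logits.max()  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

alpha = soft_assignments(X, y, cluster_models)  # client-specific mixture
personal_model = alpha @ cluster_models         # personalized weighted model

print("mixture weights:", np.round(alpha, 3))
```

Because the client's data comes mostly from one cluster, its mixture weights concentrate on that cluster's model; this is also what would let such a client skip exchanging updates for clusters it barely uses, the intuition behind the selective-update communication savings.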