PFedDST: Personalized Federated Learning with Decentralized Selection Training

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address training inefficiency and instability in distributed learning caused by non-IID data distributions, device heterogeneity, and communication bottlenecks, this paper proposes a decentralized dynamic peer selection mechanism. Each edge device autonomously selects collaboration partners based on a communication scoring function that jointly incorporates local loss, task similarity, and historical interaction frequency, enabling co-optimization of personalized modeling and global convergence. The key innovation lies in the first explicit, computationally tractable modeling of multi-dimensional heterogeneity as a communication score, and the construction of a communication-aware dynamic collaboration graph. Extensive experiments on multiple non-IID benchmarks show that the method achieves 23%–37% faster convergence and improves average accuracy by 1.8–4.2 percentage points over FedAvg, FedProx, and pFedMe.
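The summary describes each device scoring candidate peers on three signals (local loss, task similarity, historical selection frequency) and then picking its collaborators. A minimal sketch of that idea is below; the function names, the use of cosine similarity on model updates as the task-similarity proxy, and the weights `alpha`, `beta`, `gamma` are all assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def communication_score(local_update, peer_update, peer_loss, times_selected,
                        alpha=1.0, beta=1.0, gamma=0.5):
    """Hypothetical communication score combining the three signals the paper
    names. Exact form and weights in PFedDST are not given here; this sketch
    rewards low peer loss and similar tasks, and penalizes over-selected peers."""
    # Task similarity proxy: cosine similarity between model-update vectors.
    sim = np.dot(local_update, peer_update) / (
        np.linalg.norm(local_update) * np.linalg.norm(peer_update) + 1e-12)
    # Lower peer loss and higher similarity raise the score; frequent past
    # selection lowers it, encouraging exploration of new collaborators.
    return alpha * (-peer_loss) + beta * sim - gamma * times_selected

def select_peers(scores, k=2):
    """Each device independently picks its top-k peers by score
    (the decentralized selection step)."""
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Repeating this selection each round yields the dynamic collaboration graph: edges follow the current scores rather than a fixed topology.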

📝 Abstract
Distributed Learning (DL) enables the training of machine learning models across multiple devices, yet it faces challenges like non-IID data distributions and device capability disparities, which can impede training efficiency. Communication bottlenecks further complicate traditional Federated Learning (FL) setups. To mitigate these issues, we introduce the Personalized Federated Learning with Decentralized Selection Training (PFedDST) framework. PFedDST enhances model training by allowing devices to strategically evaluate and select peers based on a comprehensive communication score. This score integrates loss, task similarity, and selection frequency, ensuring optimal peer connections. This selection strategy is tailored to increase local personalization and promote beneficial peer collaborations to strengthen the stability and efficiency of the training process. Our experiments demonstrate that PFedDST not only enhances model accuracy but also accelerates convergence. This approach outperforms state-of-the-art methods in handling data heterogeneity, delivering both faster and more effective training in diverse and decentralized systems.
Problem

Research questions and friction points this paper is trying to address.

Addresses non-IID data distribution challenges in distributed learning.
Reduces communication bottlenecks in federated learning setups.
Enhances training efficiency and model accuracy in decentralized systems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized peer selection strategy
Enhanced local personalization
Improved training efficiency and convergence