FedCRL: Personalized Federated Learning with Contrastive Shared Representations for Label Heterogeneity in Non-IID Data

📅 2024-04-27
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
📄 PDF
🤖 AI Summary
To address accuracy degradation and fairness deterioration in federated learning under non-IID data—particularly due to label distribution skew and client-level data scarcity—this paper proposes FedCRL, a personalized federated learning framework. FedCRL introduces a contrastive shared representation mechanism that enables clients to collaboratively learn prototypical semantic representations. It further combines shallow-layer parameter sharing with an adaptive local weighted aggregation strategy to enhance participation fairness for data-scarce clients while preserving privacy. Technically, the framework integrates contrastive learning, hierarchical parameter sharing, and federated optimization. Extensive experiments on multiple non-IID benchmark datasets demonstrate that FedCRL significantly outperforms state-of-the-art federated methods: it achieves new SOTA performance in both overall accuracy and fairness metrics—e.g., improving the worst-client accuracy by up to 12.3%.

Technology Category

Application Category

📝 Abstract
Heterogeneity resulting from label distribution skew and data scarcity can lead to inaccuracy and unfairness in intelligent communication applications that mainly rely on distributed computing. To deal with it, this paper proposes a novel personalized federated learning algorithm, named Federated Contrastive Shareable Representations (FedCoSR), to facilitate knowledge sharing among clients while maintaining data privacy. Specifically, parameters of local models' shallow layers and typical local representations are both considered shareable information for the server and aggregated globally. To address poor performance caused by label distribution skew among clients, contrastive learning is adopted between local and global representations to enrich local knowledge. Additionally, to ensure fairness for clients with scarce data, FedCoSR introduces adaptive local aggregation to coordinate the global model involvement in each client. Our simulations demonstrate FedCoSR's effectiveness in mitigating label heterogeneity by achieving accuracy and fairness improvements over existing methods on datasets with varying degrees of label heterogeneity.
Problem

Research questions and friction points this paper is trying to address.

Addresses label distribution skew in federated learning
Mitigates performance degradation from non-IID data heterogeneity
Ensures fairness for clients with scarce data resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning with shareable representations
Contrastive learning for label distribution skew
Adaptive local aggregation for data scarcity
🔎 Similar Papers
No similar papers found.
C
Chenghao Huang
Department of Data Science and AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Australia
X
Xiaolu Chen
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China, 611731
Yanru Zhang
Yanru Zhang
Professor, University of Electronic Science and Technology of China
Game TheorySmart GridWireless Networking
H
Hao Wang
Department of Data Science and AI, Faculty of Information Technology, Monash University, Melbourne, VIC 3800, Australia