CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address aggregation bias and insufficient personalization in conventional federated learning under data-heterogeneous and data-scarce scenarios—caused by uniform model averaging—this paper proposes a dynamic contribution-aware personalized federated learning framework. Methodologically, it introduces (1) a dual-subspace contribution evaluation mechanism driven by gradient direction divergence and prediction deviation, eliminating heuristic weight assumptions; and (2) a parameter-level personalization strategy coupled with mask-aware momentum optimization to enhance aggregation fairness and training stability. The framework enables fine-grained dynamic weighted aggregation and robust co-optimization of personalized submodels. Extensive experiments on CIFAR-10/10C, CINIC-10, and Mini-ImageNet demonstrate that the method significantly outperforms state-of-the-art approaches in personalized accuracy, distributional robustness, convergence stability, and scalability.

📝 Abstract
Personalized federated learning (PFL) addresses a critical challenge: collaboratively training customized models for clients with heterogeneous and scarce local data. Conventional federated learning, which relies on a single consensus model, proves inadequate under such data heterogeneity. Its standard aggregation scheme, which weights client updates heuristically or by data volume, operates under an equal-contribution assumption and fails to account for the actual utility and reliability of each client's update. This often results in suboptimal personalization and aggregation bias. To overcome these limitations, we introduce Contribution-Oriented PFL (CO-PFL), a novel algorithm that dynamically estimates each client's contribution for global aggregation. CO-PFL performs a joint assessment by analyzing both gradient direction discrepancies and prediction deviations, leveraging information from gradient and data subspaces. This dual-subspace analysis provides a principled and discriminative aggregation weight for each client, emphasizing high-quality updates. Furthermore, to bolster personalization adaptability and optimization stability, CO-PFL cohesively integrates a parameter-wise personalization mechanism with mask-aware momentum optimization. Our approach effectively mitigates aggregation bias, strengthens global coordination, and enhances local performance by facilitating the construction of tailored submodels with stable updates. Extensive experiments on four benchmark datasets (CIFAR10, CIFAR10C, CINIC10, and Mini-ImageNet) confirm that CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
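The dual-subspace weighting described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: it assumes gradient alignment with the mean update direction as the gradient-subspace signal and a per-client validation error as the prediction-deviation signal, combined through a softmax. The function names and the temperature parameter `tau` are hypothetical.

```python
import numpy as np

def contribution_weights(client_grads, client_pred_errors, tau=1.0):
    """Score each client update in two subspaces, then normalize.

    client_grads: list of flattened gradient vectors, one per client.
    client_pred_errors: per-client prediction deviation (e.g. local
        validation loss); lower means a more reliable update.
    """
    G = np.stack(client_grads)                      # (n_clients, dim)
    mean_g = G.mean(axis=0)
    # Gradient subspace: cosine alignment with the mean update direction.
    align = (G @ mean_g) / (
        np.linalg.norm(G, axis=1) * np.linalg.norm(mean_g) + 1e-12)
    # Data subspace: penalize clients whose predictions deviate more.
    dev = -np.asarray(client_pred_errors, dtype=float)
    # Softmax over the combined score yields the aggregation weights.
    score = (align + dev) / tau
    w = np.exp(score - score.max())
    return w / w.sum()

def aggregate(client_params, weights):
    """Contribution-weighted model averaging (replaces uniform FedAvg)."""
    P = np.stack(client_params)
    return (weights[:, None] * P).sum(axis=0)
```

Under this sketch, a client whose gradient points away from the consensus direction and whose predictions deviate strongly receives a small weight, which is the aggregation-bias mitigation the paper targets.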
Problem

Research questions and friction points this paper is trying to address.

Addresses data heterogeneity and scarcity in federated learning
Overcomes aggregation bias from equal-contribution assumptions
Enhances personalization accuracy and optimization stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamically estimates client contributions for aggregation
Analyzes gradient and prediction deviations in dual subspaces
Integrates parameter-wise personalization with mask-aware momentum optimization
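The last bullet, combining parameter-wise personalization with mask-aware momentum, can be illustrated with a small sketch. The details below are assumptions, not the authors' code: a boolean mask marks which parameters follow the globally coordinated update, while unmasked entries remain personalized and keep their local momentum state untouched.

```python
import numpy as np

def mask_aware_momentum_step(params, grad, velocity, mask,
                             lr=0.1, beta=0.9):
    """One SGD-with-momentum step restricted to masked entries.

    mask: boolean array; True where the parameter participates in the
    shared update, False for personalized entries left unchanged here.
    """
    # Update the momentum buffer only where the mask is active, so
    # personalized parameters do not accumulate stale momentum.
    velocity = np.where(mask, beta * velocity + grad, velocity)
    params = np.where(mask, params - lr * velocity, params)
    return params, velocity
```

Restricting both the parameter step and the momentum update to the mask is one way to read "mask-aware": the momentum buffer stays consistent with the submodel that is actually being trained, which supports the stability claim in the summary.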