FedDPC: Handling Data Heterogeneity and Partial Client Participation in Federated Learning

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning faces significant challenges from data heterogeneity and partial client participation, which inflate update variance, pull the global model away from the global optimum, degrade model performance, and slow training. To address these issues, this paper proposes a unified optimization framework integrating projection constraints and adaptive scaling. Specifically, local gradient updates are projected onto the direction of the previous global update to mitigate heterogeneity-induced bias, while an adaptive scaling mechanism stabilizes aggregation under partial participation. This work is the first to jointly model and suppress both variance sources in a cohesive manner. Extensive experiments on heterogeneous image classification benchmarks—including CIFAR-10, CIFAR-100, and Tiny-ImageNet—demonstrate substantial improvements in convergence speed and test accuracy, consistently outperforming methods such as FedAvg, FedProx, and SCAFFOLD.

📝 Abstract
Data heterogeneity is a significant challenge in modern federated learning (FL) as it creates variance in local model updates, causing the aggregated global model to shift away from the true global optimum. Partial client participation in FL further exacerbates this issue by skewing the aggregation of local models towards the data distribution of participating clients. This creates additional variance in the global model updates, causing the global model to converge away from the optima of the global objective. These variances lead to instability in FL training, which degrades global model performance and slows down FL training. While existing literature primarily focuses on addressing data heterogeneity, the impact of partial client participation has received less attention. In this paper, we propose FedDPC, a novel FL method, designed to improve FL training and global model performance by mitigating both data heterogeneity and partial client participation. FedDPC addresses these issues by projecting each local update onto the previous global update, thereby controlling variance in both local and global updates. To further accelerate FL training, FedDPC employs adaptive scaling for each local update before aggregation. Extensive experiments on image classification tasks with multiple heterogeneously partitioned datasets validate the effectiveness of FedDPC. The results demonstrate that FedDPC outperforms state-of-the-art FL algorithms by achieving faster reduction in training loss and improved test accuracy across communication rounds.
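The two ingredients described in the abstract—projecting each local update onto the previous global update, then adaptively scaling before aggregation—can be sketched as below. This is a hypothetical realization, not the paper's exact formulation: the function names, the choice of vector projection, and the norm-matching scaling rule are all assumptions.

```python
import numpy as np

def project_onto(update, direction, eps=1e-12):
    """Project a local update onto the direction of the previous
    global update (assumed projection rule; the paper's may differ)."""
    d = direction / (np.linalg.norm(direction) + eps)
    return np.dot(update, d) * d

def feddpc_aggregate(local_updates, prev_global_update, weights=None, eps=1e-12):
    """Sketch of projected + adaptively scaled aggregation.

    Each local update is first projected onto the previous global
    update to control client drift; the projected vector is then
    rescaled to the original update's norm (an assumed form of the
    adaptive scaling) before weighted averaging.
    """
    n = len(local_updates)
    if weights is None:
        weights = np.full(n, 1.0 / n)  # uniform weights over participants
    agg = np.zeros_like(prev_global_update, dtype=float)
    for w, u in zip(weights, local_updates):
        p = project_onto(u, prev_global_update)
        scale = np.linalg.norm(u) / (np.linalg.norm(p) + eps)
        agg += w * scale * p
    return agg
```

For instance, a local update of `[3, 4]` projected onto a previous global update along `[1, 0]` gives `[3, 0]`, which the assumed scaling rule rescales to norm 5, i.e. `[5, 0]`, before averaging.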
Problem

Research questions and friction points this paper is trying to address.

Mitigates data heterogeneity in federated learning
Addresses partial client participation in federated learning
Improves global model performance and training speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Projects local updates onto previous global updates
Uses adaptive scaling for local updates before aggregation
Mitigates data heterogeneity and partial client participation