Local Performance vs. Out-of-Distribution Generalization: An Empirical Analysis of Personalized Federated Learning in Heterogeneous Data Environments

📅 2025-10-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address client drift and the generalization imbalance in federated learning under non-independent and identically distributed (non-IID) data, this paper identifies a critical limitation of existing personalization methods: their excessive focus on local accuracy at the expense of out-of-distribution (OOD) generalization, a fundamental pillar of FedAvg's robustness. The authors propose a unified evaluation paradigm that jointly assesses local accuracy and OOD generalization, and design FLIU, an adaptive personalized update mechanism. Within the FedAvg framework, FLIU introduces client-specific personalization factors that dynamically balance global consistency and local adaptability. Experiments on MNIST and CIFAR-10 under IID, pathological non-IID, and Dirichlet non-IID settings show that FLIU achieves high local accuracy while significantly improving OOD generalization, outperforming state-of-the-art personalized federated learning methods.

📝 Abstract
In the context of Federated Learning with heterogeneous data environments, local models tend to converge to their own local optima during local training, deviating from the overall data distribution. Aggregating these local updates, e.g., with FedAvg, often does not align with the global optimum (client drift), resulting in an update that is suboptimal for most clients. Personalized Federated Learning approaches address this challenge by focusing exclusively on the average local performance of clients' models on their own data distributions. Generalization to out-of-distribution samples, which is a substantial benefit of FedAvg and a significant component of robustness, appears to be inadequately incorporated into the evaluation process. This study provides a thorough evaluation of Federated Learning approaches, encompassing both their local performance and their generalization capabilities. To this end, we examine different stages within a single communication round to enable a more nuanced understanding of the considered metrics. Furthermore, we propose and incorporate a modified FedAvg variant, designated Federated Learning with Individualized Updates (FLIU), which extends the algorithm by a straightforward individualization step with an adaptive personalization factor. We evaluate and compare the approaches empirically using MNIST and CIFAR-10 under various distributional conditions, including benchmark IID and pathological non-IID settings, as well as additional novel test environments based on Dirichlet distributions, specifically designed to stress the algorithms with complex data heterogeneity.
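The individualization step described in the abstract can be pictured as an interpolation between the FedAvg aggregate and each client's locally trained parameters, controlled by a per-client personalization factor. The following minimal sketch illustrates that idea; the function names and the fixed `alpha` are illustrative assumptions, not the paper's exact FLIU update or its adaptation rule for the factor.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Standard FedAvg: average client parameters weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def individualized_update(w_global, w_local, alpha):
    """Sketch of an FLIU-style individualization step: blend the aggregated
    global model with the client's local model. alpha in [0, 1] is the
    personalization factor (alpha=0 recovers plain FedAvg; alpha=1 keeps
    the purely local model). In the paper this factor is adaptive."""
    return alpha * w_local + (1.0 - alpha) * w_global

# Toy example with two clients holding flat parameter vectors.
w1, w2 = np.array([0.0, 0.0]), np.array([1.0, 1.0])
w_global = fedavg_aggregate([w1, w2], [1, 1])          # -> [0.5, 0.5]
w2_personalized = individualized_update(w_global, w2, alpha=0.5)
```

A small alpha keeps clients close to the global consensus (preserving OOD generalization), while a large alpha favors local adaptation, which is exactly the trade-off the paper's joint evaluation targets.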
Problem

Research questions and friction points this paper is trying to address.

Addressing client drift in federated learning with heterogeneous data distributions
Evaluating local performance versus out-of-distribution generalization in personalized FL
Proposing individualized updates to improve both local and global model performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Individualized updates with adaptive personalization factor
Extends FedAvg via simple personalization step
Evaluated under diverse data heterogeneity conditions
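The Dirichlet non-IID setting mentioned above is commonly constructed by splitting each class across clients with Dirichlet-sampled proportions, where a smaller concentration parameter yields stronger heterogeneity. A minimal sketch of that standard benchmark construction follows; the helper name and parameters are illustrative and not claimed to match the paper's exact partitioning protocol.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Assign sample indices to clients so that each class is split
    across clients according to proportions drawn from Dirichlet(alpha).
    Smaller alpha -> more skewed (heterogeneous) client label
    distributions; large alpha approaches an IID split."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        props = rng.dirichlet(alpha * np.ones(num_clients))
        # Convert cumulative proportions into split points over this class.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 20 samples from 2 classes split across 4 clients.
labels = [0] * 10 + [1] * 10
parts = dirichlet_partition(labels, num_clients=4, alpha=0.5)
```

Every sample is assigned to exactly one client, so the partition is disjoint and complete regardless of alpha.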
Mortesa Hussaini
Computational Science Hub (CSH) & Dept. Artificial Intelligence in Agricultural Engineering, University of Hohenheim, Stuttgart, Germany
Jan Theiß
Computational Science Hub (CSH) & Dept. Artificial Intelligence in Agricultural Engineering, University of Hohenheim, Stuttgart, Germany
Anthony Stein
Artificial Intelligence in Agricultural Engineering, University of Hohenheim
Intelligent Agricultural Technology · Organic Computing · Evolutionary ML · Learning Classifier Systems