Mitigating Participation Imbalance Bias in Asynchronous Federated Learning

📅 2025-11-24
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the "heterogeneity amplification" problem in asynchronous federated learning (AFL), wherein uneven client participation—especially frequent updates from fast clients under non-IID data—exacerbates global model bias and slows convergence. To mitigate this, the authors propose a full-client participation mechanism and design two buffer-free, immediate-update algorithms: ACE (All-Client Engagement AFL) and its delay-aware variant ACED. Both employ dynamic participation control, latency-adaptive weight adjustment, and full-client gradient aggregation, resolving the tension between update staleness and client heterogeneity without introducing auxiliary storage overhead. Theoretical analysis establishes convergence guarantees under realistic asynchrony and heterogeneity. Extensive experiments across multiple tasks demonstrate that ACE and ACED significantly improve convergence speed and model robustness—particularly in high-heterogeneity and high-latency regimes—outperforming state-of-the-art AFL baselines.

📝 Abstract
In Asynchronous Federated Learning (AFL), the central server immediately updates the global model with each arriving client's contribution. As a result, clients perform their local training on different model versions, causing information staleness (delay). In federated environments with non-IID local data distributions, this asynchronous pattern amplifies the adverse effect of client heterogeneity (due to different data distribution, local objectives, etc.), as faster clients contribute more frequent updates, biasing the global model. We term this phenomenon heterogeneity amplification. Our work provides a theoretical analysis that maps AFL design choices to their resulting error sources when heterogeneity amplification occurs. Guided by our analysis, we propose ACE (All-Client Engagement AFL), which mitigates participation imbalance through immediate, non-buffered updates that use the latest information available from all clients. We also introduce a delay-aware variant, ACED, to balance client diversity against update staleness. Experiments on different models for different tasks across diverse heterogeneity and delay settings validate our analysis and demonstrate the robust performance of our approaches.
Problem

Research questions and friction points this paper is trying to address.

Mitigating participation imbalance bias in asynchronous federated learning systems
Addressing heterogeneity amplification caused by faster clients dominating updates
Solving information staleness from clients training on different model versions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Immediate non-buffered updates engage all clients
Delay-aware variant balances diversity against staleness
Mitigates participation imbalance using latest client information
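The mechanism sketched above can be illustrated in code. The following is a minimal toy sketch of an ACE-style server loop, not the paper's actual implementation: it assumes the server caches each client's most recent update (no buffer of pending updates), applies a hypothetical polynomial staleness decay for the delay-aware (ACED-like) weighting, and refreshes the global model immediately on every arrival using the latest information from all clients. The class names, decay rule, and learning rate are illustrative assumptions.

```python
import numpy as np

def delay_weight(staleness, alpha=0.5):
    """Hypothetical delay-aware weight: down-weight stale contributions."""
    return (1.0 + staleness) ** (-alpha)

class AsyncServer:
    """Toy ACE-style server: keeps each client's latest update and
    refreshes the global model immediately on every arrival."""

    def __init__(self, model_dim, num_clients, lr=0.1):
        self.model = np.zeros(model_dim)
        self.latest = np.zeros((num_clients, model_dim))  # last update per client
        self.staleness = np.zeros(num_clients)            # model versions behind
        self.lr = lr
        self.version = 0

    def on_client_update(self, client_id, update, client_version):
        # Record the newly arrived update and how stale it is.
        self.latest[client_id] = update
        self.staleness[client_id] = self.version - client_version
        # Delay-aware aggregation over ALL clients' latest information.
        w = delay_weight(self.staleness)
        agg = (w[:, None] * self.latest).sum(axis=0) / w.sum()
        # Immediate, buffer-free global update.
        self.model -= self.lr * agg
        self.version += 1
        self.staleness += 1  # every cached update falls one version behind
        return self.model
```

Contrast with buffered AFL: here no updates are queued before aggregation, so a fast client's arrival never waits, yet the aggregate always mixes in the slower clients' cached contributions, which is the participation-balancing idea the summary describes.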
Authors
Xiangyu Chang
University of California, Riverside
Manyi Yao
University of California, Riverside
Srikanth V. Krishnamurthy
University of California, Riverside
Christian R. Shelton
University of California, Riverside
Anirban Chakraborty
Indian Institute of Science
Ananthram Swami
Army Research Laboratory
Samet Oymak
University of Michigan | Google Research
machine learning, decision making, statistics, optimization, language models
Amit Roy-Chowdhury
University of California, Riverside