FedAdaVR: Adaptive Variance Reduction for Robust Federated Learning under Limited Client Participation

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the gradient noise, client drift, and participation bias that arise from partial client participation in federated learning. To mitigate these issues under data heterogeneity, the authors propose FedAdaVR, an algorithm that integrates adaptive optimization with variance reduction: the server caches each client's most recent local update and reuses it to stand in for clients absent from the current round. They further introduce a quantized variant, FedAdaVR-Quant, which reduces the memory consumption of this cache by 50%–87.5% while preserving model performance. Theoretical analysis establishes, for the first time in the non-convex setting, that the bias induced by partial participation is eliminated, and empirical results show the method consistently outperforms existing approaches in both IID and non-IID scenarios.
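For intuition, the cached-update mechanism can be sketched as follows. This is a minimal illustration, assuming a memory of per-client deltas combined with an Adam-style adaptive server step; all names (FedAdaVRServer, cached, round, ...) are hypothetical and not taken from the paper.

```python
# Hedged sketch: server keeps the last update from every client and averages
# over ALL clients each round, substituting cached updates for absent ones,
# then applies an Adam-style adaptive step to the variance-reduced aggregate.
import numpy as np

class FedAdaVRServer:
    def __init__(self, model_dim, num_clients, lr=1e-2,
                 beta1=0.9, beta2=0.99, eps=1e-8):
        self.w = np.zeros(model_dim)                      # global model
        self.cached = np.zeros((num_clients, model_dim))  # last update per client
        self.m = np.zeros(model_dim)                      # 1st moment estimate
        self.v = np.zeros(model_dim)                      # 2nd moment estimate
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps

    def round(self, participating):
        """`participating` maps client id -> fresh local update (delta)."""
        for cid, delta in participating.items():
            self.cached[cid] = delta      # refresh memory for active clients
        # Average over every client: absent clients are represented by
        # their most recent cached update, emulating full participation.
        g = self.cached.mean(axis=0)
        self.m = self.beta1 * self.m + (1 - self.beta1) * g
        self.v = self.beta2 * self.v + (1 - self.beta2) * g ** 2
        self.w += self.lr * self.m / (np.sqrt(self.v) + self.eps)
        return self.w
```

The structural point is that the average in `round` always runs over all clients, so a client's absence introduces only bounded staleness rather than participation bias.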

📝 Abstract
Federated learning (FL) faces substantial challenges due to heterogeneity, which gives rise to gradient noise, client drift, and partial client participation error; the last is the most pervasive yet remains insufficiently addressed in the current literature. In this paper, we propose FedAdaVR, a novel FL algorithm that tackles the heterogeneity issues caused by sporadic client participation by combining an adaptive optimiser with a variance reduction technique. The method reuses each client's most recently stored update even when that client is absent from the current training round, thereby emulating its presence. Furthermore, we propose FedAdaVR-Quant, which stores client updates in quantised form, reducing FedAdaVR's memory requirements by 50%, 75%, or 87.5% while maintaining equivalent model performance. We analyse the convergence behaviour of FedAdaVR under general nonconvex conditions and prove that the proposed algorithm eliminates the partial client participation error. Extensive experiments on multiple datasets, under both independent and identically distributed (IID) and non-IID settings, demonstrate that FedAdaVR consistently outperforms state-of-the-art baselines.
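As a rough illustration of where the 50%/75%/87.5% figures could come from: storing each cached update as 16-, 8-, or 4-bit codes instead of 32-bit floats shrinks the cache by exactly those fractions. The sketch below shows simple uniform quantisation under that assumption; `quantize` and `dequantize` are illustrative helpers, not the paper's implementation.

```python
# Hedged sketch: a b-bit uniformly quantised cache entry needs b/32 of the
# float32 memory (plus two float scalars), i.e. 50% / 75% / 87.5% savings
# for b = 16 / 8 / 4. In practice 4-bit codes would be packed two per byte.
import numpy as np

def quantize(update, bits=8):
    """Uniformly quantise a float32 update to unsigned b-bit codes."""
    lo, hi = float(update.min()), float(update.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0  # guard constant updates
    codes = np.round((update - lo) / scale)
    return codes.astype(np.uint16 if bits > 8 else np.uint8), lo, scale

def dequantize(codes, lo, scale):
    """Reconstruct an approximate float32 update from its codes."""
    return codes.astype(np.float32) * scale + lo

u = np.random.randn(1000).astype(np.float32)
codes, lo, scale = quantize(u, bits=8)
err = np.abs(dequantize(codes, lo, scale) - u).max()
print(f"max reconstruction error: {err:.4f}")  # small relative to update range
```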
Problem

Research questions and friction points this paper is trying to address.

federated learning
client heterogeneity
partial client participation
gradient noise
client drift
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Variance Reduction
Adaptive Optimization
Client Participation
Quantization
S. M. Ruhul
School of Computing and Mathematical Sciences, University of Leicester, UK
Kabir Howlader
School of Computing and Mathematical Sciences, University of Leicester, UK
Xiao Chen
School of Computing and Mathematical Sciences, University of Leicester, UK
Yifei Xie
University of Edinburgh
Optimization · Blockchain · Decentralized AI · Operations Research
Lu Liu
University of Exeter
Artificial Intelligence · Digital Twins · Healthcare · Sustainability · Distributed Systems