DriftGuard: Mitigating Asynchronous Data Drift in Federated Learning

📅 2026-03-19
🤖 AI Summary
This work addresses the challenge of model performance degradation and high retraining costs in federated learning caused by asynchronous data drift across devices. To this end, the authors propose DriftGuard, a novel framework that integrates a Mixture-of-Experts (MoE) architecture into federated continual learning. DriftGuard decouples shared and local parameters and leverages device clustering to enable selective local retraining without sharing raw data, while dynamically balancing accuracy and computational cost. Extensive experiments demonstrate that DriftGuard achieves state-of-the-art or comparable accuracy across multiple datasets and model architectures, reduces retraining costs by up to 83%, and improves accuracy per unit cost by as much as 2.3× compared to existing methods.
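The core idea summarized above, decoupling shared parameters from group-local parameters so that retraining can be applied selectively, can be sketched minimally. Everything below is an illustrative assumption, not the authors' actual API: the class name `SharedLocalModel`, the version counters, and the retrain methods are placeholders standing in for real parameter updates.

```python
# Illustrative sketch (not DriftGuard's implementation): shared parameters
# capture globally transferable knowledge, while each device group keeps
# local parameters that can be retrained without touching the rest.

class SharedLocalModel:
    def __init__(self, n_groups):
        self.shared = {"backbone_version": 0}                     # global knowledge
        self.local = {g: {"head_version": 0} for g in range(n_groups)}

    def global_retrain(self):
        # Triggered when system-wide drift is identified:
        # update only the shared parameters.
        self.shared["backbone_version"] += 1

    def group_retrain(self, group):
        # Triggered for one drifting device cluster: update only that
        # group's local parameters, leaving other groups untouched.
        self.local[group]["head_version"] += 1

model = SharedLocalModel(n_groups=3)
model.group_retrain(1)    # only group 1 adapts to its local drift
model.global_retrain()    # every group then shares the updated backbone
```

In this toy form, the cost saving reported in the summary corresponds to group retraining touching one group's parameters instead of retraining every device's full model.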

📝 Abstract
In real-world Federated Learning (FL) deployments, data distributions on devices that participate in training evolve over time. This leads to asynchronous data drift, where different devices shift at different times and toward different distributions. Mitigating such drift is challenging: frequent retraining incurs high computational cost on resource-constrained devices, while infrequent retraining degrades performance on drifting devices. We propose DriftGuard, a federated continual learning framework that efficiently adapts to asynchronous data drift. DriftGuard adopts a Mixture-of-Experts (MoE) inspired architecture that separates shared parameters, which capture globally transferable knowledge, from local parameters that adapt to group-specific distributions. This design enables two complementary retraining strategies: (i) global retraining, which updates the shared parameters when system-wide drift is identified, and (ii) group retraining, which selectively updates local parameters for clusters of devices identified via MoE gating patterns, without sharing raw data. Experiments across multiple datasets and models show that DriftGuard matches or exceeds state-of-the-art accuracy while reducing total retraining cost by up to 83%. As a result, it achieves the highest accuracy per unit retraining cost, improving over the strongest baseline by up to 2.3x. DriftGuard is available for download from https://github.com/blessonvar/DriftGuard.
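The abstract's group-retraining strategy relies on clustering devices by their MoE gating patterns rather than by raw data. A minimal sketch of that idea follows; the function name `cluster_by_gating`, the device IDs, and the argmax-based grouping rule are assumptions for illustration, not the paper's actual clustering algorithm.

```python
# Illustrative sketch: grouping devices by MoE gating patterns.
# Each device reports only its average gate weights over experts
# (no raw data leaves the device); devices routed predominantly to
# the same expert are placed in the same retraining group.
from collections import defaultdict

def cluster_by_gating(gating_stats):
    """gating_stats: {device_id: [average gate weight per expert]}."""
    groups = defaultdict(list)
    for device, weights in gating_stats.items():
        dominant = max(range(len(weights)), key=lambda i: weights[i])
        groups[dominant].append(device)
    return dict(groups)

stats = {
    "dev_a": [0.7, 0.2, 0.1],
    "dev_b": [0.6, 0.3, 0.1],
    "dev_c": [0.1, 0.1, 0.8],
}
groups = cluster_by_gating(stats)
# dev_a and dev_b share expert 0, so they would be retrained together;
# dev_c is routed to expert 2 and forms its own group.
```

Grouping by gating statistics is one simple privacy-preserving proxy for distribution similarity: devices whose data drifts toward similar distributions tend to activate similar experts.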
Problem

Research questions and friction points this paper is trying to address.

Federated Learning
Asynchronous Data Drift
Data Distribution Shift
Continual Learning
Non-IID Data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Asynchronous Data Drift
Mixture-of-Experts
Continual Learning
Parameter Separation