Dynamic Participation in Federated Learning: Benchmarks and a Knowledge Pool Plugin

📅 2025-11-20
🤖 AI Summary
To address the lack of systematic benchmarks and effective algorithms for dynamic participation in federated learning (DPFL), this paper introduces the first open-source DPFL benchmark framework, supporting configurable data distributions, dynamic participation patterns, and multi-dimensional evaluation. The core contribution is Knowledge-Pool Federated Learning (KPFL): a novel method featuring a dual-age mechanism to model client activity, integrating data-bias-aware weighting with generative knowledge distillation to preserve and transfer knowledge across active and idle clients over time; it further incorporates a distributed knowledge cache to enhance scalability. Experiments demonstrate that dynamic participation significantly degrades convergence speed and generalization performance. In contrast, KPFL effectively mitigates training oscillation and catastrophic forgetting, achieving an average 12.3% improvement in test accuracy and up to 2.1× faster convergence across diverse non-IID settings and high client dropout rates.

📝 Abstract
Federated learning (FL) enables clients to collaboratively train a shared model in a distributed manner, setting it apart from traditional deep learning paradigms. However, most existing FL research assumes consistent client participation, overlooking the practical scenario of dynamic participation (DPFL), where clients may intermittently join or leave during training. Moreover, no existing benchmarking framework systematically supports the study of DPFL-specific challenges. In this work, we present the first open-source framework explicitly designed for benchmarking FL models under dynamic client participation. Our framework provides configurable data distributions, participation patterns, and evaluation metrics tailored to DPFL scenarios. Using this platform, we benchmark four major categories of widely adopted FL models and uncover substantial performance degradation under dynamic participation. To address these challenges, we further propose Knowledge-Pool Federated Learning (KPFL), a generic plugin that maintains a shared knowledge pool across both active and idle clients. KPFL leverages dual-age and data-bias weighting, combined with generative knowledge distillation, to mitigate instability and prevent knowledge loss. Extensive experiments demonstrate the significant impact of dynamic participation on FL performance and the effectiveness of KPFL in improving model robustness and generalization.
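The abstract describes KPFL's aggregation as combining "dual-age and data-bias weighting" over a pool spanning active and idle clients. The paper's exact formulas are not given on this page; the sketch below is a hypothetical illustration of the idea, assuming an exponential-decay staleness age, a ramp-up membership age, and a bias score per client (all invented names), combined into normalized aggregation weights.

```python
import numpy as np

def dual_age_weights(rounds_since_update, rounds_since_join, decay=0.1):
    """Hypothetical dual-age weighting: down-weight clients whose
    pooled knowledge is stale (long idle since their last update),
    and discount very recently joined clients whose knowledge may
    not yet be reliable. Illustrative only, not KPFL's formula."""
    staleness = np.exp(-decay * np.asarray(rounds_since_update, dtype=float))
    maturity = 1.0 - np.exp(-decay * np.asarray(rounds_since_join, dtype=float))
    w = staleness * maturity
    return w / w.sum()

def aggregate(client_models, rounds_since_update, rounds_since_join, bias_scores):
    """Combine the dual-age weights with a data-bias-aware factor
    (here: smaller bias score -> larger weight), then take the
    weighted average of flattened client parameter vectors."""
    w = dual_age_weights(rounds_since_update, rounds_since_join)
    bias = 1.0 / (1.0 + np.asarray(bias_scores, dtype=float))
    w = w * bias
    w = w / w.sum()
    models = np.stack([np.asarray(m, dtype=float) for m in client_models])
    return w @ models
```

Under this sketch, a client that updated one round ago outweighs one idle for five rounds, which matches the stated goal of preserving knowledge from idle clients without letting stale contributions dominate.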
Problem

Research questions and friction points this paper is trying to address.

Addresses performance degradation in federated learning with dynamic client participation
Introduces a benchmarking framework for evaluating dynamic participation scenarios
Proposes a knowledge pool plugin to mitigate instability and knowledge loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source framework for dynamic FL benchmarking
Knowledge pool plugin with dual-age weighting
Generative distillation prevents knowledge loss
Ming-Lun Lee
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Fu-Shiang Yang
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Cheng-Kuan Lin
Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
Yan-Ann Chen
Department of Computer Science and Engineering, Yuan Ze University
Pervasive Intelligence · Internet of Things · Artificial Intelligence of Things · Cyber-Physical System
Chih-Yu Lin
Department of Computer Science and Engineering, National Taiwan Ocean University, Keelung, Taiwan
Yu-Chee Tseng
College of AI, National Yang Ming Chiao Tung University
mobile computing · wireless network · artificial intelligence