🤖 AI Summary
In dynamic edge environments, federated learning faces a fundamental trade-off among model accuracy, training efficiency, and client participation fairness. Method: This paper systematically evaluates fairness-driven client selection strategies—namely RBFF and RBCSF—against random and greedy baselines through empirical studies on CIFAR-10, Fashion-MNIST, and EMNIST. Contribution/Results: While fairness enhancement significantly improves participation equity across clients, it consistently slows convergence and yields only marginal gains in global model accuracy, exposing critical efficiency bottlenecks of existing fairness mechanisms under dynamic, resource-constrained edge conditions. This work provides the first quantitative characterization of the inverse relationship between fairness and training overhead. It identifies intrinsic limitations of current approaches—particularly inadequate modeling of client heterogeneity and poor responsiveness to timeliness—and establishes empirically validated optimization pathways toward lightweight, edge-aware fair scheduling.
📝 Abstract
Federated learning (FL) has emerged as a transformative paradigm for edge intelligence, enabling collaborative model training while preserving data privacy across distributed personal devices. However, the inherent volatility of edge environments, characterized by dynamic resource availability and heterogeneous client capabilities, poses significant challenges for achieving high accuracy and fairness in client participation. This paper investigates the fundamental trade-off between model accuracy and fairness in highly volatile edge environments. We provide an extensive empirical evaluation of fairness-based client selection algorithms, namely RBFF and RBCSF, against random and greedy client selection with respect to fairness, model performance, and training time, on three benchmark datasets (CIFAR10, FashionMNIST, and EMNIST). This work aims to shed light on the fairness-performance and fairness-speed trade-offs in a volatile edge environment and explore potential future research opportunities to address existing pitfalls in *fair client selection* strategies in FL. Our results indicate that more equitable client selection algorithms, while providing marginally better participation opportunities among clients, can result in slower global training in volatile environments. The code for our experiments can be found at https://github.com/obaidullahzaland/FairFL_FLTA.
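To make the trade-off concrete, the sketch below contrasts the random-selection baseline with a simple fairness-aware selector that weights each client inversely by its past participation count. This is an illustrative scheme only, not the actual RBFF or RBCSF rules from the paper; the function names and the inverse-participation weighting are assumptions introduced for the example.

```python
import random
from collections import Counter

def select_random(clients, k, rng):
    # Baseline: uniform random sampling of k distinct clients per round.
    return rng.sample(clients, k)

def select_fairness_weighted(clients, k, participation, rng):
    # Illustrative fairness-aware selection (NOT the exact RBFF/RBCSF rules):
    # weight each client inversely to how often it has already participated,
    # so under-served clients become more likely to be chosen.
    pool = list(clients)
    weights = [1.0 / (1 + participation[c]) for c in pool]
    chosen = []
    for _ in range(k):
        idx = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(idx))
        weights.pop(idx)
    return chosen

def run(selector, clients, k, rounds, seed=0):
    # Simulate `rounds` of selection and track per-client participation.
    rng = random.Random(seed)
    participation = Counter()
    for _ in range(rounds):
        if selector is select_random:
            chosen = select_random(clients, k, rng)
        else:
            chosen = selector(clients, k, participation, rng)
        participation.update(chosen)
    return participation
```

Comparing the spread of the resulting participation counts (e.g., max minus min across clients) illustrates the equity gain the paper measures; the efficiency cost arises in practice because fairness-weighted picks may favor slow or poorly connected clients over fast ones.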