Edge AI in Highly Volatile Environments: Is Fairness Worth the Accuracy Trade-off?

📅 2025-11-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
In dynamic edge environments, federated learning faces a fundamental trade-off among model accuracy, training efficiency, and client participation fairness. Method: This paper systematically evaluates fairness-driven client selection strategies—namely RBFF and RBCSF—against random and greedy baselines through empirical studies on CIFAR-10, Fashion-MNIST, and EMNIST. Contribution/Results: While fairness enhancement significantly improves participation equity across clients, it consistently degrades convergence speed and yields marginal gains in global model accuracy, exposing critical efficiency bottlenecks of existing fairness mechanisms under dynamic, resource-constrained edge conditions. This work provides the first quantitative characterization of the inverse relationship between fairness and training overhead. It identifies intrinsic limitations in current approaches—particularly inadequate heterogeneity modeling and poor timeliness responsiveness—and establishes empirically validated optimization pathways toward lightweight, edge-aware fair scheduling.

📝 Abstract
Federated learning (FL) has emerged as a transformative paradigm for edge intelligence, enabling collaborative model training while preserving data privacy across distributed personal devices. However, the inherent volatility of edge environments, characterized by dynamic resource availability and heterogeneous client capabilities, poses significant challenges for achieving high accuracy and fairness in client participation. This paper investigates the fundamental trade-off between model accuracy and fairness in highly volatile edge environments. It provides an extensive empirical evaluation of fairness-based client selection algorithms, such as RBFF and RBCSF, against random and greedy client selection in terms of fairness, model performance, and training time, on three benchmark datasets (CIFAR10, FashionMNIST, and EMNIST). This work aims to shed light on the fairness-performance and fairness-speed trade-offs in a volatile edge environment and to explore future research opportunities for addressing existing pitfalls in fair client selection strategies in FL. Our results indicate that more equitable client selection algorithms, while providing marginally better opportunity among clients, can result in slower global training in volatile environments. The code for our experiments can be found at https://github.com/obaidullahzaland/FairFL_FLTA.
Problem

Research questions and friction points this paper is trying to address.

Investigates accuracy-fairness trade-off in volatile edge AI environments
Evaluates fairness-based client selection algorithms in federated learning systems
Analyzes impact of equitable client selection on training speed and performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluates fairness-based client selection algorithms
Compares RBFF and RBCSF against random selection
Analyzes trade-offs between fairness and training speed
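To make the comparison concrete, the sketch below contrasts a uniform-random baseline with an illustrative fairness-weighted selector that biases sampling toward clients with fewer past participations, and measures participation equity with Jain's fairness index. This is a minimal illustration of the general idea only; the actual RBFF and RBCSF selection rules are defined in the paper and are not reproduced here.

```python
import random
from collections import defaultdict

def select_random(clients, k, counts):
    """Baseline: uniform random sampling of k clients per round."""
    return random.sample(clients, k)

def select_fairness_weighted(clients, k, counts):
    """Illustrative fairness-aware selection (NOT the RBFF/RBCSF rules):
    sample k distinct clients with probability inversely proportional
    to how often each client has participated so far."""
    pool = list(clients)
    weights = [1.0 / (1 + counts[c]) for c in pool]
    chosen = []
    for _ in range(k):
        r = random.random() * sum(weights)
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                chosen.append(pool.pop(i))
                weights.pop(i)
                break
    return chosen

def jain_index(counts, clients):
    """Jain's fairness index over participation counts: 1.0 = perfectly equal."""
    vals = [counts[c] for c in clients]
    s = sum(vals)
    return (s * s) / (len(vals) * sum(v * v for v in vals)) if s else 0.0
```

A typical experiment would run both selectors for many rounds over the same client pool, update `counts` after each round, and compare the resulting Jain indices alongside wall-clock convergence time, which is where the fairness-speed trade-off discussed above shows up.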
Obaidullah Zaland
Department of Computing Science, Umeå University, Umeå, SE-90187, Sweden
Feras M. Awaysheh
Associate Professor of Edge Intelligence, Umea University, Sweden
Cloud Computing/Big Data · Edge AI · Federated Learning · Industry 4.0/IIoT · Data Privacy
Sawsan Al Zubi
University of Santiago de Compostela, Santiago de Compostela, Spain
Abdul Rahman Safi
Kabul University, Kabul, Afghanistan
Monowar Bhuyan
Associate Professor & WASP Fellow, Umeå University, Sweden.
Machine learning · Anomaly detection · Systems and AI security · Distributed systems