Empirical Analysis of Asynchronous Federated Learning on Heterogeneous Devices: Efficiency, Fairness, and Privacy Trade-offs

📅 2025-05-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the tripartite trade-off among efficiency, fairness, and privacy in federated learning (FL) under device heterogeneity, jointly quantifying all three dimensions on a realistic multi-tier edge testbed for the first time. Experiments reveal that asynchronous FL (e.g., FedAsync) accelerates convergence roughly tenfold but introduces severe imbalances: high-capability clients update 6–10× more frequently and incur up to a fivefold increase in privacy leakage, while low-capability clients suffer from sparse, stale updates and excessive noise from local differential privacy (LDP), significantly degrading their model accuracy. These findings motivate an adaptive FL protocol design paradigm grounded in client capability and participation dynamics: by co-optimizing aggregation mechanisms and privacy budget allocation, such protocols can overcome the systemic limitations of static designs in heterogeneous environments and balance efficiency, fairness, and privacy preservation across diverse edge devices.

📝 Abstract
Device heterogeneity poses major challenges in Federated Learning (FL), where resource-constrained clients slow down synchronous schemes that wait for all updates before aggregation. Asynchronous FL addresses this by incorporating updates as they arrive, substantially improving efficiency. While its efficiency gains are well recognized, its privacy costs remain largely unexplored, particularly for high-end devices that contribute updates more frequently, increasing their cumulative privacy exposure. This paper presents the first comprehensive analysis of the efficiency-fairness-privacy trade-off in synchronous vs. asynchronous FL under realistic device heterogeneity. We empirically compare FedAvg and staleness-aware FedAsync using a physical testbed of five edge devices spanning diverse hardware tiers, integrating Local Differential Privacy (LDP) and the Moments Accountant to quantify per-client privacy loss. Using Speech Emotion Recognition (SER) as a privacy-critical benchmark, we show that FedAsync achieves up to 10x faster convergence but exacerbates fairness and privacy disparities: high-end devices contribute 6-10x more updates and incur up to 5x higher privacy loss, while low-end devices suffer amplified accuracy degradation due to infrequent, stale, and noise-perturbed updates. These findings motivate the need for adaptive FL protocols that jointly optimize aggregation and privacy mechanisms based on client capacity and participation dynamics, moving beyond static, one-size-fits-all solutions.
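The staleness-aware asynchronous aggregation described above can be sketched as follows. This is a minimal illustration in the style of FedAsync, not the paper's implementation; the mixing rate `alpha` and the polynomial decay exponent `a` are assumed hyperparameters, not values reported by the authors.

```python
import numpy as np

def staleness_weight(server_step, client_step, alpha=0.6, a=0.5):
    """Polynomial staleness discount: the staler the client's base model,
    the smaller its mixing weight (alpha and a are illustrative values)."""
    staleness = server_step - client_step
    return alpha * (staleness + 1) ** (-a)

def async_aggregate(global_weights, client_weights, server_step, client_step):
    """Mix a single client's update into the global model as soon as it
    arrives, rather than waiting for all clients as in synchronous FedAvg."""
    mix = staleness_weight(server_step, client_step)
    return (1.0 - mix) * global_weights + mix * client_weights
```

Because updates are merged on arrival, fast clients are mixed in often with near-full weight, while slow clients contribute rarely and are further discounted for staleness, which is exactly the participation imbalance the experiments measure.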
Problem

Research questions and friction points this paper is trying to address.

Analyzes efficiency-fairness-privacy trade-offs in asynchronous Federated Learning
Investigates privacy disparities for high-end vs low-end devices in FL
Motivates adaptive FL protocols for heterogeneous device capabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asynchronous FL for efficient heterogeneous device updates
LDP and Moments Accountant for privacy quantification
Adaptive FL protocols optimizing aggregation and privacy
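The per-client privacy quantification above can be illustrated with a small sketch: Gaussian-noise LDP applied to each update, plus a crude closed-form bound showing why clients that update more often accumulate more privacy loss. The clipping threshold, noise multiplier, and delta are assumed values, and the bound uses basic Gaussian composition rather than the tighter Moments Accountant the paper employs; it shows the trend only.

```python
import numpy as np

def ldp_perturb(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative LDP mechanism: clip the update to clip_norm, then add
    Gaussian noise scaled by an assumed noise multiplier."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm,
                                size=update.shape)

def cumulative_epsilon(num_updates, noise_multiplier=1.1, delta=1e-5):
    """Rough composition bound: k Gaussian releases at scale sigma behave
    like one release at sigma/sqrt(k), so epsilon grows ~sqrt(k) with the
    number of updates a client contributes."""
    effective_sigma = noise_multiplier / np.sqrt(num_updates)
    return np.sqrt(2.0 * np.log(1.25 / delta)) / effective_sigma
```

Under this bound, a high-end client contributing 9× as many updates as a low-end one incurs about 3× the cumulative epsilon, mirroring the disparity the paper measures with the Moments Accountant.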