WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated Learning over Mobile Devices via Adaptive Subnetwork Scheduling

📅 2024-05-01
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address high training latency in federated learning (FL) caused by dynamic heterogeneity in mobile devices' computational capabilities and channel conditions, this paper proposes a utility-driven adaptive subnetwork scheduling mechanism. Unlike conventional fixed-subnetwork allocation schemes, the approach jointly models real-time device computation/communication resources and global training progress. The authors design the first utility function for subnetwork selection that explicitly integrates dynamic resource states with FL convergence requirements, enabling lightweight pruning and aggregation. Experimental results demonstrate that the proposed method reduces training latency by 37% compared to state-of-the-art baselines, without compromising model accuracy, significantly enhancing both the efficiency and practicality of FL in heterogeneous edge environments.

πŸ“ Abstract
As a popular distributed learning paradigm, federated learning (FL) over mobile devices fosters numerous applications, but its practical deployment is hindered by participating devices' computing and communication heterogeneity. Some pioneering research efforts proposed to extract subnetworks from the global model and assign as large a subnetwork as possible to each device for local training based on its full computing and communication capacity. Although such fixed-size subnetwork assignment enables FL training over heterogeneous mobile devices, it is unaware of (i) the dynamic changes of devices' communication and computing conditions and (ii) FL training progress and its dynamic requirements for local training contributions, both of which may cause very long FL training delay. Motivated by these dynamics, in this paper, we develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL) approach to accelerate FL training through adaptive subnetwork scheduling. Instead of sticking to a fixed-size subnetwork, WHALE-FL introduces a novel subnetwork selection utility function to capture device and FL training dynamics, and guides the mobile device to adaptively select the subnetwork size for local training based on (a) its computing and communication capacity, (b) its dynamic computing and/or communication conditions, and (c) FL training status and its corresponding requirements for local training contributions. Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
Problem

Research questions and friction points this paper is trying to address.

Addresses computing and communication heterogeneity in federated learning over mobile devices.
Proposes adaptive subnetwork scheduling to reduce FL training delays.
Enhances FL efficiency by dynamically adjusting subnetwork sizes based on device conditions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive subnetwork scheduling for FL
Dynamic device condition awareness
Novel subnetwork selection utility function
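The utility-driven selection idea above can be sketched in a few lines: each device scores candidate subnetwork width ratios by trading off estimated per-round latency against the training contribution a larger subnetwork provides, with the contribution term decaying as training converges. The function names, scaling assumptions, and candidate ratios below are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of adaptive subnetwork selection (assumptions, not
# WHALE-FL's actual utility function).

def round_latency(p, flops_full, compute_speed, bits_full, bandwidth):
    """Estimated local-training + upload time (seconds) for width ratio p."""
    compute_time = (p ** 2) * flops_full / compute_speed  # assume FLOPs shrink ~quadratically with width
    comm_time = p * bits_full / bandwidth                 # assume uploaded payload ~linear in width
    return compute_time + comm_time

def utility(p, round_idx, total_rounds, **resources):
    """Reward larger subnetworks early in training; fade the reward near convergence."""
    stage_weight = 1.0 - round_idx / total_rounds
    return stage_weight * p - round_latency(p, **resources)

def select_subnetwork(round_idx, total_rounds, **resources):
    """Pick the candidate width ratio with the highest utility."""
    candidates = (0.25, 0.5, 0.75, 1.0)
    return max(candidates, key=lambda p: utility(p, round_idx, total_rounds, **resources))
```

Under this sketch, a device on a slow link selects a smaller subnetwork than one on a fast link, and every device drifts toward smaller subnetworks as training nears convergence, matching the paper's stated criteria (a)-(c).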