Enhancing Efficiency in Multidevice Federated Learning through Data Selection

πŸ“… 2022-11-08
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address inefficient training in multi-device federated learning caused by resource-constrained edge devices, this paper proposes Centaurβ€”a novel framework that introduces an on-demand client-side data selection mechanism at the edge and pioneers a coupled paradigm of data selection and model sharding for single-user, multi-device scenarios. Centaur jointly optimizes challenges arising from non-IID data distributions, device heterogeneity, and dynamic mobility. It enables collaborative deep neural network (DNN) model sharding across lightweight and resource-rich devices, significantly improving resource utilization. Extensive experiments across five neural architectures and six benchmark datasets demonstrate an average 19% improvement in classification accuracy and a 58% reduction in federated training latency. The implementation is open-sourced to advance research in decentralized federated learning.
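The client-side data selection described above can be illustrated with a minimal sketch. The exact selection policy used by Centaur is not spelled out here, so this example assumes a simple loss-based importance criterion: a constrained device scores its local samples and keeps only the fraction its compute budget allows, before participating in federated training. The function name and budget parameter are illustrative, not from the paper.

```python
# Hypothetical sketch of on-device data selection, assuming a
# loss-based importance score (Centaur's actual policy may differ).
def select_data(samples, loss_fn, budget_fraction=0.5):
    """Keep the highest-loss samples within the device's compute budget."""
    scored = sorted(samples, key=loss_fn, reverse=True)
    k = max(1, int(len(scored) * budget_fraction))
    return scored[:k]

# Toy usage: the values stand in for per-sample training losses.
data = [0.1, 0.9, 0.4, 0.7, 0.2, 0.8]
selected = select_data(data, loss_fn=lambda x: x, budget_fraction=0.5)
```

Under this sketch, the lightweight device trains only on the retained subset, while the resource-rich device in the same user's ecosystem handles the heavier model shards.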
πŸ“ Abstract
Ubiquitous wearable and mobile devices provide access to a diverse set of data. However, the mobility demand for our devices naturally imposes constraints on their computational and communication capabilities. A solution is to locally learn knowledge from data captured by ubiquitous devices, rather than to store and transmit the data in its original form. In this paper, we develop a federated learning framework, called Centaur, to incorporate on-device data selection at the edge, which allows partition-based training of deep neural nets through collaboration between constrained and resourceful devices within the multidevice ecosystem of the same user. We benchmark on five neural net architectures and six datasets that include image data and wearable sensor time series. On average, Centaur achieves ~19% higher classification accuracy and ~58% lower federated training latency, compared to the baseline. We also evaluate Centaur when dealing with imbalanced non-IID data, client participation heterogeneity, and different mobility patterns. To encourage further research in this area, we release our code at https://github.com/nokia-bell-labs/data-centric-federated-learning
Problem

Research questions and friction points this paper is trying to address.

Improving federated learning efficiency in multidevice systems
Addressing computational constraints in mobile and wearable devices
Enhancing accuracy and reducing latency in distributed training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning with on-device data selection
Partition-based training via device collaboration
Improved accuracy and reduced training latency
πŸ”Ž Similar Papers
No similar papers found.
Fan Mo
Imperial College London, UK
M. Malekzadeh
Nokia Bell Labs, UK
S. Chatterjee
Nokia Bell Labs, UK
F. Kawsar
Nokia Bell Labs and University of Glasgow, UK
Akhil Mathur
Meta AI
Large Language Models · Deep Learning · On-device ML · ML Systems