🤖 AI Summary
To address the high communication overhead and energy consumption in federated learning (FL) under edge computing constraints—particularly limited battery life and bandwidth—this paper proposes a similarity-aware update control mechanism. The server clusters devices based on model parameter similarity using K-means or DBSCAN, and selects only one representative device per cluster for global model aggregation. Crucially, this work is the first to incorporate user-behavior-informed model similarity into FL update scheduling, enabling sparse communication without accuracy loss. Experiments on a Raspberry Pi–Android prototype platform and multiple benchmark datasets demonstrate a 40–65% reduction in communication volume and substantial decreases in client-side energy consumption. Long-term evaluation shows no statistically significant difference in test accuracy compared to FedAvg, with fluctuations under 0.3%.
📝 Abstract
Federated learning is a distributed machine learning framework for collaboratively training a global model without uploading privacy-sensitive data to a centralized server. This framework is usually applied to edge devices such as smartphones, wearables, and Internet of Things (IoT) devices that collect information directly from users. However, these devices are mostly battery-powered, and the update procedure of federated learning constantly consumes battery power and transmission bandwidth. In this work, we propose an update control mechanism for federated learning, FedSAUC, that considers the similarity of users' behaviors (models). On the server side, we apply clustering algorithms to group devices with similar models, and then select a few representatives from each cluster to upload their updates for training. We also implemented a prototype testbed on edge devices to validate the performance. The experimental results show that this update control does not affect training accuracy in the long run.
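The server-side flow described above — cluster clients by model-parameter similarity, then aggregate updates only from one representative per cluster — can be sketched as below. This is a minimal illustration, not the paper's implementation: it uses a plain NumPy k-means (the paper also mentions DBSCAN), flattens each client's model into a vector, and picks as representative the client closest to its cluster centroid. All function names and the choice of Euclidean distance are assumptions.

```python
import numpy as np

def cluster_and_select(client_vectors, k, iters=20, seed=0):
    """Cluster flattened client model vectors with a basic k-means and
    return one representative client index per non-empty cluster
    (the client whose vector is closest to the cluster centroid)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(client_vectors, dtype=float)
    # Initialize centroids from k distinct client vectors.
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each client to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids as the mean of their members.
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    reps = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        if len(idx):
            d = np.linalg.norm(X[idx] - centroids[c], axis=1)
            reps.append(int(idx[d.argmin()]))
    return reps

def aggregate_representatives(client_vectors, reps):
    """FedAvg-style averaging restricted to the selected representatives,
    so only |reps| clients need to transmit their updates this round."""
    X = np.asarray(client_vectors, dtype=float)
    return X[reps].mean(axis=0)
```

For example, with four clients whose models form two well-separated groups, `cluster_and_select(vectors, k=2)` returns one index from each group, and only those two clients would upload in that round — the intuition behind the reported bandwidth savings.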