Online-Score-Aided Federated Learning: Taming the Resource Constraints in Wireless Networks

📅 2024-08-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Federated learning (FL) for resource-constrained devices in wireless networks faces challenges including online data arrival, ultra-low storage capacity, and heterogeneous communication channels and computational capabilities—leading to slow convergence and high communication overhead. To address these, this work introduces the first joint modeling framework for online data streams under minimal storage constraints. It proposes a dynamic client scoring mechanism based on normalized gradient similarity to mitigate client drift caused by statistical heterogeneity, and designs an adaptive weighted aggregation scheme coupled with a lightweight local training scheduling strategy. Extensive experiments across two tasks, three datasets, and four model architectures demonstrate that the proposed method significantly outperforms six state-of-the-art baselines: convergence speed improves by up to 3.2×, and communication cost is reduced by 37%.

📝 Abstract
While FL is a widely popular distributed ML strategy that protects data privacy, time-varying wireless network parameters and heterogeneous system configurations of wireless devices pose significant challenges. Although the limited radio resources of the network and the limited computational resources of the clients are widely acknowledged, two critical yet often ignored aspects are (a) wireless devices can only dedicate a small chunk of their limited storage to the FL task and (b) in many practical wireless applications, new training samples may arrive in an online manner. Therefore, we propose a new FL algorithm called OSAFL, specifically designed to learn tasks relevant to wireless applications under these practical considerations. Since it has long been proven that under extreme resource constraints clients may perform an arbitrary number of local training steps, which may lead to client drift under statistically heterogeneous data distributions, we leverage normalized gradient similarities and weight clients' updates by optimized scores that facilitate the convergence rate of the proposed OSAFL algorithm. Our extensive simulation results on two different tasks -- each with three different datasets -- with four popular ML models validate the effectiveness of OSAFL compared to six existing state-of-the-art FL baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses resource constraints in wireless federated learning networks
Manages limited device storage and online data arrival challenges
Mitigates client drift under statistically heterogeneous data distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online-score-aided federated learning algorithm
Normalized gradient similarities for client updates
Optimized scores for convergence rate improvement
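The bullets above describe scoring clients by normalized gradient similarity and aggregating their updates with optimized weights. The paper's exact scoring and optimization rules are not reproduced on this page, so the sketch below is only an illustration of the general idea, assuming cosine similarity to the mean client update as the normalized similarity measure and clipped, normalized similarities as aggregation weights; the function name `osafl_style_aggregate` and the uniform-weight fallback are hypothetical choices for this sketch, not taken from the paper.

```python
import math


def cosine(u, v):
    """Cosine similarity between two vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)  # epsilon guards against zero norms


def osafl_style_aggregate(client_updates):
    """Sketch of score-weighted aggregation (illustrative, not the paper's
    exact rule): score each client's update by its cosine similarity to the
    mean update, clip negative scores (drifting clients), then aggregate
    with the normalized scores as weights."""
    n = len(client_updates)
    dim = len(client_updates[0])
    mean_update = [sum(u[i] for u in client_updates) / n for i in range(dim)]

    # Clipped similarity scores: clients pulling against the consensus get 0.
    scores = [max(cosine(u, mean_update), 0.0) for u in client_updates]

    total = sum(scores)
    if total == 0.0:
        # All clients disagree with the mean direction: fall back to uniform.
        weights = [1.0 / n] * n
    else:
        weights = [s / total for s in scores]

    # Weighted global update.
    return [sum(w * u[i] for w, u in zip(weights, client_updates))
            for i in range(dim)]
```

In this toy rule, a client whose update points against the consensus direction receives a negative similarity, is clipped to a zero score, and therefore contributes nothing to the aggregated update, which is one simple way such scoring can dampen client drift.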
Md Ferdous Pervej
Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089 USA
Minseok Choi
Kyung Hee University
Wireless caching network · Federated learning · Stochastic network optimization · Reinforcement learning
A. Molisch
Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA 90089 USA