Efficient Federated Learning with Timely Update Dissemination

📅 2025-07-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address model staleness, slow global convergence, and degraded accuracy caused by untimely dissemination of global model updates in federated learning, this paper proposes two update dissemination mechanisms that exploit spare downlink bandwidth: the asynchronous framework FedASMU and its synchronous counterpart FedSSMU. FedASMU combines server-side dynamic staleness-aware weighted aggregation with device-side adaptive fusion of the latest global model into ongoing local training, jointly mitigating gradient bias and local model drift; FedSSMU carries the same mechanisms into a synchronous setting. Extensive experiments across six models and five public benchmark datasets show that the proposed methods improve test accuracy by up to 145.87% and training efficiency by up to 97.59% over baselines, while also enhancing model consistency and convergence stability.

📝 Abstract
Federated Learning (FL) has emerged as a compelling methodology for the management of distributed data, marked by significant advancements in recent years. In this paper, we propose an efficient FL approach that capitalizes on additional downlink bandwidth resources to ensure timely update dissemination. Initially, we implement this strategy within an asynchronous framework, introducing the Asynchronous Staleness-aware Model Update (FedASMU), which integrates both server-side and device-side methodologies. On the server side, we present an asynchronous FL system model that employs a dynamic model aggregation technique, which harmonizes local model updates with the global model to enhance both accuracy and efficiency. Concurrently, on the device side, we propose an adaptive model adjustment mechanism that integrates the latest global model with local models during training to further elevate accuracy. Subsequently, we extend this approach to a synchronous context, referred to as FedSSMU. Theoretical analyses substantiate the convergence of our proposed methodologies. Extensive experiments, encompassing six models and five public datasets, demonstrate that FedASMU and FedSSMU significantly surpass baseline methods in terms of both accuracy (up to 145.87%) and efficiency (up to 97.59%).
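The server-side mechanism assigns each arriving local model a weight that shrinks with its staleness (how many global versions have elapsed since the client pulled the model) before folding it into the global model. Below is a minimal sketch of such staleness-aware weighted aggregation; the reciprocal decay function, the function and parameter names, and the use of plain floats instead of tensors are illustrative assumptions, not the paper's exact formulation.

```python
from typing import Dict, List, Tuple

def staleness_weight(staleness: int, decay: float = 0.5) -> float:
    """Down-weight updates computed against older global model versions."""
    return 1.0 / (1.0 + decay * staleness)

def aggregate(global_model: Dict[str, float],
              updates: List[Tuple[Dict[str, float], int]],
              mixing: float = 1.0) -> Dict[str, float]:
    """Blend local models into the global model, weighted by staleness.

    `updates` holds (local_model, staleness) pairs. Each local model
    contributes its delta from the current global model, scaled by its
    normalized staleness weight.
    """
    new_model = dict(global_model)
    total = sum(staleness_weight(s) for _, s in updates)
    for local_model, s in updates:
        alpha = mixing * staleness_weight(s) / total
        for name, value in local_model.items():
            new_model[name] += alpha * (value - global_model[name])
    return new_model
```

For example, `aggregate({'w': 0.0}, [({'w': 1.0}, 0), ({'w': 2.0}, 4)])` yields `{'w': 1.25}`: the fresh update dominates while the stale one still contributes a quarter of its delta.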
Problem

Research questions and friction points this paper is trying to address.

Exploit additional downlink bandwidth to disseminate global model updates in a timely manner
Combine server-side and device-side mechanisms to improve accuracy
Support both asynchronous and synchronous FL settings with improved efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asynchronous Staleness-aware Model Update (FedASMU) and its synchronous counterpart FedSSMU
Server-side dynamic model aggregation that weights local updates by staleness
Device-side adaptive model adjustment that fuses the latest global model into local training (see the sketch after this list)
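On the device side, the idea is that a client receiving a fresher global model mid-training fuses it into its in-progress local model and continues training from the blend. Here is a hedged sketch under assumed names; the fixed `mix` coefficient is a placeholder, since the paper adjusts the fusion adaptively rather than fixing it.

```python
from typing import Dict

def fuse_fresh_global(local_model: Dict[str, float],
                      fresh_global: Dict[str, float],
                      mix: float = 0.3) -> Dict[str, float]:
    """Pull the in-progress local model toward the latest global model.

    `mix` is a hypothetical fusion coefficient; FedASMU adapts this
    value dynamically rather than using a constant as done here.
    """
    return {name: (1.0 - mix) * w + mix * fresh_global[name]
            for name, w in local_model.items()}
```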
👥 Authors
Juncheng Jia, Soochow University (Edge Intelligence, Federated Learning, Internet of Things, Mobile Computing)
Ji Liu, HiThink Research, Hangzhou, China
Chao Huo, Soochow University, Suzhou, China
Yihui Shen, Soochow University, Suzhou, China
Yang Zhou, Auburn University, United States
Huaiyu Dai, Professor of Electrical and Computer Engineering, NC State University (Communications, Signal Processing, Networking, Security and Privacy, Machine Learning)
Dejing Dou, Fudan University, Shanghai and BEDI Cloud, Beijing, China