Towards Straggler-Resilient Split Federated Learning: An Unbalanced Update Approach

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Split Federated Learning (SFL) suffers from poor scalability and training inefficiency due to the strict synchronization required between clients and the server, which makes it highly vulnerable to straggler devices. To address this, we propose MU-SplitFed, an SFL algorithm built on a zeroth-order optimization framework in which the server performs τ local updates per client round. This unbalanced design decouples server-side training progress from client upload schedules, allowing communication and computation to overlap. We theoretically establish a convergence rate of O(√(d/(τT))), a τ-fold linear speedup in communication rounds. Empirical evaluations show that MU-SplitFed consistently outperforms baseline methods in straggler-dominated settings, that adaptive selection of τ effectively mitigates latency, and that the open-source implementation is practically deployable.

📝 Abstract
Split Federated Learning (SFL) enables scalable training on edge devices by combining the parallelism of Federated Learning (FL) with the computational offloading of Split Learning (SL). Despite its great success, SFL suffers significantly from the well-known straggler issue in distributed learning systems. This problem is exacerbated by the dependency between the Split Server and clients: the Split Server's model update relies on receiving activations from the clients. This synchronization requirement introduces significant latency, making stragglers a critical bottleneck to the scalability and efficiency of the system. To mitigate this problem, we propose MU-SplitFed, a straggler-resilient SFL algorithm in zeroth-order optimization that decouples training progress from straggler delays via a simple yet effective unbalanced update mechanism. By enabling the server to perform $\tau$ local updates per client round, MU-SplitFed achieves a convergence rate of $O(\sqrt{d/(\tau T)})$ for non-convex objectives, demonstrating a linear speedup of $\tau$ in communication rounds. Experiments demonstrate that MU-SplitFed consistently outperforms baseline methods in the presence of stragglers and effectively mitigates their impact through adaptive tuning of $\tau$. The code for this project is available at https://github.com/Johnny-Zip/MU-SplitFed.
Problem

Research questions and friction points this paper is trying to address.

Mitigating straggler delays in Split Federated Learning systems
Reducing synchronization latency between server and clients
Improving scalability through unbalanced update mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unbalanced update mechanism reduces straggler dependency
Server performs multiple local updates per client round
Zeroth-order optimization yields a linear speedup of τ in communication rounds
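The unbalanced update described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the split model is reduced to two linear layers with a squared loss, all shapes and names (`W_c`, `W_s`, `server_loss`, `zo_grad`, the step size, and τ) are hypothetical choices for the sketch, and the zeroth-order gradient is the standard two-point Gaussian-direction estimator. The key point it shows is the asymmetry: the client uploads activations once per round, while the server runs τ local zeroth-order updates on its sub-model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split model (hypothetical shapes, for illustration only):
# client-side layer W_c maps input -> activation,
# server-side layer W_s maps activation -> output.
W_c = rng.normal(size=(3, 4)) * 0.5   # client-side weights (fixed this round)
W_s = rng.normal(size=(2, 3)) * 0.5   # server-side weights, trained with ZO steps
X = rng.normal(size=(8, 4))           # one client mini-batch
Y = rng.normal(size=(8, 2))           # targets

def server_loss(w_flat, acts, targets):
    """Squared loss of the server-side sub-model on uploaded activations."""
    W = w_flat.reshape(2, 3)
    pred = acts @ W.T
    return float(np.mean((pred - targets) ** 2))

def zo_grad(w_flat, acts, targets, mu=1e-3):
    """Two-point zeroth-order gradient estimate along a random Gaussian direction."""
    u = rng.normal(size=w_flat.shape)
    f_plus = server_loss(w_flat + mu * u, acts, targets)
    f_minus = server_loss(w_flat - mu * u, acts, targets)
    return (f_plus - f_minus) / (2 * mu) * u

# One communication round: the client uploads its activations once ...
acts = X @ W_c.T

# ... and the server performs tau local ZO updates on its sub-model
# (the "unbalanced update": tau server steps per single client upload).
tau, lr = 100, 0.02
w = W_s.ravel().copy()
loss_before = server_loss(w, acts, Y)
for _ in range(tau):
    w -= lr * zo_grad(w, acts, Y)
loss_after = server_loss(w, acts, Y)
print(f"server loss: {loss_before:.4f} -> {loss_after:.4f}")
```

Because the server only needs function evaluations (no backpropagated gradients from the client side), these τ steps can proceed while slow clients are still computing or uploading, which is what decouples training progress from straggler delays.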
Authors
Dandan Liang — Rochester Institute of Technology, Rochester, New York
Jianing Zhang — Purdue University (Federated Learning, Multiple Agent Systems, Differential Privacy)
Evan Chen — Purdue University, West Lafayette, Indiana
Zhe Li — Rochester Institute of Technology, Rochester, New York
Rui Li — Rochester Institute of Technology, Rochester, New York
Haibo Yang — Rochester Institute of Technology (Federated Learning, Optimization, Machine Learning)