🤖 AI Summary
Split Federated Learning (SFL) suffers from poor scalability and training inefficiency due to the strong synchronization required between clients and the server, which makes it highly vulnerable to straggler devices. To address this, we propose MU-SplitFed, an asynchronous SFL algorithm built on a zeroth-order optimization framework with asymmetric updates: the server performs τ local updates per client round. This design decouples server-side model updates from client activation uploads, allowing communication and computation to proceed asynchronously. We theoretically establish a convergence rate of O(√(d/(τT))), a τ-fold reduction in communication rounds. Empirical evaluations show that MU-SplitFed significantly outperforms baseline methods in straggler-dominated settings, that its adaptive τ-selection strategy effectively mitigates straggler latency, and that an open-source implementation validates both its efficacy and its practical deployability.
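To make the unbalanced update mechanism concrete, here is a minimal sketch of one client round, assuming a standard two-point zeroth-order gradient estimator on the server-side sub-model; all names (`zo_grad`, `server_round`, `tau`, `mu`, `lr`) are illustrative and not taken from the paper's implementation.

```python
import numpy as np

def zo_grad(loss_fn, params, activations, labels, mu=1e-3):
    """Two-point zeroth-order gradient estimate along one random direction."""
    u = np.random.randn(*params.shape)                 # random perturbation direction
    f_plus = loss_fn(params + mu * u, activations, labels)
    f_minus = loss_fn(params - mu * u, activations, labels)
    return ((f_plus - f_minus) / (2.0 * mu)) * u       # directional-derivative estimate

def server_round(loss_fn, params, activations, labels, tau=4, lr=0.01):
    """One client round: the server reuses a single activation upload for
    tau local zeroth-order updates instead of one (the unbalanced update)."""
    for _ in range(tau):
        params = params - lr * zo_grad(loss_fn, params, activations, labels)
    return params

# Toy usage: a linear server head with squared loss on one batch of activations.
np.random.seed(0)
acts = np.random.normal(size=(32, 8))                  # activations uploaded by a client
labels = np.random.normal(size=32)
loss = lambda p, a, y: np.mean((a @ p - y) ** 2)
params = server_round(loss, np.zeros(8), acts, labels)
```

The key design point is that a single uploaded activation is reused for all τ server-side updates, so a slow client delays only its own upload rather than the server's training progress.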
📝 Abstract
Split Federated Learning (SFL) enables scalable training on edge devices by combining the parallelism of Federated Learning (FL) with the computational offloading of Split Learning (SL). Despite its success, SFL suffers significantly from the well-known straggler issue in distributed learning systems. The problem is exacerbated by the dependency between the Split Server and the clients: the server-side model update relies on receiving activations from the clients. This synchronization requirement introduces significant latency, making stragglers a critical bottleneck to the scalability and efficiency of the system. To mitigate this problem, we propose MU-SplitFed, a straggler-resilient SFL algorithm based on zeroth-order optimization that decouples training progress from straggler delays via a simple yet effective unbalanced update mechanism.
By enabling the server to perform $\tau$ local updates per client round, MU-SplitFed achieves a convergence rate of $O(\sqrt{d/(\tau T)})$ for non-convex objectives, where $d$ is the problem dimension and $T$ the number of communication rounds, demonstrating a linear speedup of $\tau$ in communication rounds. Experiments demonstrate that MU-SplitFed consistently outperforms baseline methods in the presence of stragglers and effectively mitigates their impact through adaptive tuning of $\tau$. The code for this project is available at https://github.com/Johnny-Zip/MU-SplitFed.
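As a quick sanity check on the claimed speedup (our own arithmetic, not a derivation from the paper), solving the rate for the number of rounds needed to reach a target accuracy $\epsilon$ gives

$$\sqrt{\frac{d}{\tau T}} \le \epsilon \quad\Longrightarrow\quad T \ge \frac{d}{\tau\,\epsilon^{2}},$$

so, for a fixed target accuracy, the required number of communication rounds shrinks linearly in $\tau$.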