AdaptSFL: Adaptive Split Federated Learning in Resource-constrained Edge Networks

📅 2024-03-19
🏛️ arXiv.org
📈 Citations: 50
Influential: 1
🤖 AI Summary
Resource constraints on edge devices hinder local training of deep models. Method: This paper proposes AdaptSFL, an adaptive split federated learning (SFL) framework that dynamically coordinates model splitting (MS) and client-side model aggregation (MA) to jointly optimize communication overhead, computational load, and convergence speed. Contribution/Results: The paper establishes a theoretical convergence analysis for SFL, quantifying how split-layer placement and aggregation frequency affect the convergence rate, and designs a resource-aware adaptive SFL scheduling mechanism for edge-cloud co-optimization. Extensive simulations across multiple datasets show that the approach takes significantly less time to reach a target accuracy than baseline methods, validating its advantages in latency and convergence efficiency.
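The split-training step that SFL builds on (client forward pass up to a cut layer, server-side completion, gradient handed back across the cut) can be sketched as below. The two-layer toy model, layer sizes, learning rate, and all variable names (`W_c`, `W_s`, `cut` activations `a`) are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

# Illustrative sketch of one SFL training step on a model cut at layer L:
# the client holds layers [0, L), the server holds layers [L, end).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # a client mini-batch
y = rng.normal(size=(4, 2))            # regression targets

W_c = rng.normal(size=(8, 16)) * 0.1   # client-side weights (before the cut)
W_s = rng.normal(size=(16, 2)) * 0.1   # server-side weights (after the cut)
lr = 0.05

# Client: forward pass up to the cut layer; only activations are uploaded.
a = np.maximum(x @ W_c, 0.0)           # ReLU activations at the cut

# Server: completes the forward pass and computes the loss gradient (MSE).
pred = a @ W_s
grad_pred = 2.0 * (pred - y) / y.size
grad_W_s = a.T @ grad_pred
grad_a = grad_pred @ W_s.T             # gradient sent back across the cut

# Client: back-propagates through its local layers only.
grad_z = grad_a * (a > 0)              # ReLU gradient
grad_W_c = x.T @ grad_z

W_s -= lr * grad_W_s                   # server-side update
W_c -= lr * grad_W_c                   # client-side update
```

The communication cost per step is the size of `a` (uplink) plus `grad_a` (downlink), which is what moving the cut layer trades against client-side compute.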

📝 Abstract
The increasing complexity of deep neural networks poses significant barriers to democratizing them to resource-limited edge devices. To address this challenge, split federated learning (SFL) has emerged as a promising solution by offloading the primary training workload to a server via model partitioning while enabling parallel training among edge devices. However, although system optimization substantially influences the performance of SFL under resource-constrained systems, the problem remains largely uncharted. In this paper, we provide a convergence analysis of SFL which quantifies the impact of model splitting (MS) and client-side model aggregation (MA) on the learning performance, serving as a theoretical foundation. Then, we propose AdaptSFL, a novel resource-adaptive SFL framework, to expedite SFL under resource-constrained edge computing systems. Specifically, AdaptSFL adaptively controls client-side MA and MS to balance communication-computing latency and training convergence. Extensive simulations across various datasets validate that our proposed AdaptSFL framework takes considerably less time to achieve a target accuracy than benchmarks, demonstrating the effectiveness of the proposed strategies.
Problem

Research questions and friction points this paper is trying to address.

Optimizing split federated learning for resource-limited edge devices
Balancing communication-computing latency and training convergence
Adaptive control of model splitting and client-side aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive model splitting for resource optimization
Client-side model aggregation balancing latency
Convergence analysis guiding adaptive SFL framework
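The client-side model aggregation (MA) named above follows the federated-averaging pattern: periodically, a server averages the clients' client-side submodels, typically weighted by local dataset size. A minimal sketch, assuming sample-count weighting (the function and variable names are invented for illustration, not from the paper):

```python
import numpy as np

def aggregate(client_weights, n_samples):
    """Weighted average (FedAvg-style) of client-side submodel parameters.

    client_weights: list of parameter arrays, one per client.
    n_samples: list of local dataset sizes used as aggregation weights.
    """
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(client_weights, n_samples))

# Three clients with toy 2x2 client-side parameter matrices.
clients = [np.full((2, 2), v) for v in (1.0, 2.0, 4.0)]
sizes = [10, 10, 20]
global_w = aggregate(clients, sizes)   # -> all entries 2.75
```

How often this aggregation runs is one of the control knobs AdaptSFL adapts: aggregating less often saves communication but slows convergence, which is the latency/convergence trade-off the paper analyzes.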