Personalized Hierarchical Split Federated Learning in Wireless Networks

📅 2024-11-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of resource-constrained clients in wireless networks—balancing communication overhead, computational energy consumption, and personalized model performance—this paper proposes Personalized Hierarchical Split Federated Learning (PHSFL). PHSFL adopts a two-stage paradigm: (i) collaborative global training of a shared model body via model splitting and hierarchical aggregation, with each client's (randomly initialized) classifier kept frozen, and (ii) local fine-tuning of the lightweight, client-specific classifiers to obtain personalized models. Theoretical analysis characterizes the impact of model splitting and hierarchical aggregation on convergence behavior. Experiments demonstrate that PHSFL achieves global model accuracy comparable to state-of-the-art methods while improving personalized accuracy by 12.7% on average, reducing total communication volume by 38%, and decreasing measured energy consumption on edge devices by 29%.

📝 Abstract
Extreme resource constraints make large-scale machine learning (ML) with distributed clients challenging in wireless networks. On the one hand, large-scale ML requires massive information exchange between clients and server(s). On the other hand, these clients have limited battery and computational power that are often dedicated to operational computations. Split federated learning (SFL) is emerging as a potential solution to mitigate these challenges by splitting the ML model into client-side and server-side model blocks, where only the client-side block is trained on the client device. However, practical applications require personalized models that are suitable for each client's own task. Motivated by this, we propose a personalized hierarchical split federated learning (PHSFL) algorithm that is specially designed to achieve better personalization performance. More specifically, since many learned features have similar attributes regardless of the severity of the statistical heterogeneity across clients, we only train the body part of the federated learning (FL) model while keeping the (randomly initialized) classifier frozen during the training phase. We first perform extensive theoretical analysis to understand the impact of model splitting and hierarchical model aggregation on the global model. Once the global model is trained, we fine-tune each client's classifier to obtain the personalized models. Our empirical findings suggest that while the globally trained model with the untrained classifier performs similarly to existing solutions, the fine-tuned models show significantly improved personalized performance.
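The two-stage recipe described in the abstract—train only the shared body with the classifier frozen, then fine-tune each client's classifier locally—can be sketched on a toy problem. This is a minimal illustration, not the paper's architecture: the "body" and "classifier" are each a single scalar weight, the loss is squared error, and aggregation is a plain FedAvg-style mean (the paper's hierarchical aggregation is collapsed to one level). All function names here are illustrative.

```python
# Toy PHSFL-style sketch: model is y_hat = w_c * (w_b * x),
# where w_b is the shared "body" and w_c the per-client "classifier".

def grad_body(w_b, w_c, x, y):
    # dL/dw_b for squared loss L = (w_c * w_b * x - y)^2
    return 2 * (w_c * w_b * x - y) * w_c * x

def grad_clf(w_b, w_c, x, y):
    # dL/dw_c for the same loss
    return 2 * (w_c * w_b * x - y) * w_b * x

def train_body(clients, w_b, w_c, rounds=200, lr=0.01):
    # Stage 1: collaborative training of the shared body.
    # The (randomly initialized) classifier w_c stays frozen;
    # client-side updates are averaged each round (FedAvg-style).
    for _ in range(rounds):
        updates = []
        for data in clients:
            wb = w_b  # each client starts from the shared body
            for x, y in data:
                wb -= lr * grad_body(wb, w_c, x, y)
            updates.append(wb)
        w_b = sum(updates) / len(updates)
    return w_b

def finetune_classifier(data, w_b, w_c, steps=200, lr=0.01):
    # Stage 2: per-client personalization.
    # The trained body w_b is now frozen; only w_c is updated.
    for _ in range(steps):
        for x, y in data:
            w_c -= lr * grad_clf(w_b, w_c, x, y)
    return w_c

if __name__ == "__main__":
    # Two clients with heterogeneous tasks: y = 2x and y = 3x.
    c1 = [(1.0, 2.0), (2.0, 4.0)]
    c2 = [(1.0, 3.0), (2.0, 6.0)]
    w_c0 = 1.0  # frozen classifier during global training
    w_b = train_body([c1, c2], 0.5, w_c0)
    wc1 = finetune_classifier(c1, w_b, w_c0)
    wc2 = finetune_classifier(c2, w_b, w_c0)
    print(w_b, wc1 * w_b, wc2 * w_b)
```

With the classifier frozen, the shared body settles between the two clients' tasks (effective slope near 2.5); fine-tuning each client's tiny classifier then recovers the client-specific slope (2 and 3) without retraining or communicating the body.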
Problem

Research questions and friction points this paper is trying to address.

Large-scale ML in wireless networks requires massive client-server information exchange, while clients have limited battery and computational power.
Standard split federated learning produces a single global model that is not tailored to each client's personal task.
How to obtain personalized models without degrading global accuracy or adding heavy training overhead on resource-constrained clients.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized hierarchical split federated learning (PHSFL) algorithm with theoretical convergence analysis
Training only the model body while keeping the randomly initialized classifier frozen during global training
Per-client classifier fine-tuning on the frozen trained body to obtain personalized models