Flexible Personalized Split Federated Learning for On-Device Fine-Tuning of Foundation Models

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address challenges in personalizing foundation models for downstream tasks—namely, scarce client-side data, statistical heterogeneity, and constrained computational resources—this paper proposes a resource-adaptive personalized federated learning framework. The framework innovatively integrates split learning with personalized federated learning, enabling clients to dynamically partition the model according to local compute capacity and collaboratively train their respective segments. A gradient alignment strategy is introduced to facilitate efficient personalized adaptation atop the global model. Crucially, raw data remain on-device, eliminating data transmission and substantially reducing both communication and computation overhead. Extensive experiments demonstrate that the method achieves superior personalization efficiency and final accuracy compared to mainstream baselines (e.g., FedAvg, pFedMe), particularly under low-resource and highly heterogeneous settings. Under Non-IID data distributions, it yields absolute accuracy improvements of 3.2–5.7 percentage points.

📝 Abstract
Fine-tuning foundation models is critical for superior performance on personalized downstream tasks, compared to using pre-trained models. Collaborative learning can leverage local clients' datasets for fine-tuning, but limited client data and heterogeneous data distributions hinder effective collaboration. To address these challenges, we propose a flexible personalized federated learning paradigm that enables clients to engage in collaborative learning while maintaining personalized objectives. Given the limited and heterogeneous computational resources available on clients, we introduce flexible personalized split federated learning (FlexP-SFL). Based on split learning, FlexP-SFL allows each client to train a portion of the model locally while offloading the rest to a server, according to resource constraints. Additionally, we propose an alignment strategy to improve personalized model performance on global data. Experimental results show that FlexP-SFL outperforms baseline models in personalized fine-tuning efficiency and final accuracy.
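The paper does not spell out its split rule here, but the core idea of resource-adaptive splitting can be sketched: each client keeps as many front layers as its compute budget allows and ships the intermediate ("smashed") activations to the server, which runs the remaining layers. Everything below (the `choose_split` policy, the toy linear layers, the FLOP accounting) is a hypothetical illustration, not the paper's actual implementation.

```python
import numpy as np

def choose_split(num_layers, client_flops, flops_per_layer):
    """Pick how many front layers the client can afford to run locally.
    Illustrative capacity-based policy; FlexP-SFL's actual rule may differ."""
    k = int(client_flops // flops_per_layer)
    # Keep at least one layer on-device and at least one on the server.
    return max(1, min(k, num_layers - 1))

def forward(x, weights):
    """Run x through a stack of toy tanh-activated linear layers."""
    for w in weights:
        x = np.tanh(x @ w)
    return x

rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4)) * 0.1 for _ in range(6)]  # toy 6-layer model

# A weak client with ~2.5 layer-equivalents of compute keeps the first 2 layers.
k = choose_split(len(layers), client_flops=2.5, flops_per_layer=1.0)
x = rng.standard_normal((1, 4))         # raw data: never leaves the device
smashed = forward(x, layers[:k])        # client-side activations sent to server
out = forward(smashed, layers[k:])      # server completes the forward pass
```

In training, gradients would flow back along the same cut: the server backpropagates to the smashed activations, returns that gradient to the client, and each party updates only its own segment.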
Problem

Research questions and friction points this paper is trying to address.

Addresses limited client data in collaborative foundation model fine-tuning
Solves heterogeneous data distribution challenges in federated learning
Optimizes resource-constrained on-device model training via flexible splitting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Flexible personalized split federated learning paradigm
Resource-adaptive local and server model splitting
Alignment strategy for global data performance
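The summary describes a gradient alignment strategy for adapting personalized models atop the global model, without giving its exact form. One common heuristic in this spirit (similar to PCGrad-style projection, which may differ from the paper's method) removes the component of the personalized gradient that conflicts with the global gradient; `align_gradient` below is a hypothetical name:

```python
import numpy as np

def align_gradient(g_personal, g_global):
    """If the personalized gradient conflicts with the global one
    (negative inner product), project out its conflicting component,
    so the personal update never directly opposes the global objective."""
    dot = float(g_personal @ g_global)
    if dot < 0:
        g_personal = g_personal - (dot / float(g_global @ g_global)) * g_global
    return g_personal

g_p = np.array([-1.0, 0.5])   # personalized gradient, conflicting
g_g = np.array([1.0, 1.0])    # global gradient
aligned = align_gradient(g_p, g_g)  # now orthogonal to g_g, conflict removed
```

Non-conflicting gradients pass through unchanged, so alignment only intervenes when personalization would degrade performance on global data.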