DP2FL: Dual Prompt Personalized Federated Learning in Foundation Models

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address performance degradation in personalized federated learning (PFL) under data scarcity and high heterogeneity, this paper proposes a fine-tuning framework built on foundation models (e.g., CLIP). The method introduces two key innovations: (1) a dual-prompt mechanism that integrates task-aware global prompts with data-driven local prompts to enhance few-shot adaptation; and (2) an adaptive dynamic-weight aggregation strategy for efficient personalized model fusion. The resulting global model supports plug-and-play integration of newly added clients without retraining and enables prediction on previously unseen data sources. Experiments under strong data heterogeneity demonstrate faster convergence and higher personalized accuracy than state-of-the-art PFL approaches.
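The dual-prompt idea in the summary can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fusion rule (a convex combination of a shared global prompt and a client-local prompt added to CLIP-style class embeddings) and the mixing weight `alpha` are assumptions for demonstration.

```python
import numpy as np

def dual_prompt_features(class_emb, global_prompt, local_prompt, alpha=0.5):
    """Condition class embeddings on two prompts: a task-aware global
    prompt shared across clients and a data-driven local prompt.
    Hypothetical fusion rule; DP2FL's exact formulation may differ.
    Returns L2-normalized embeddings, as in CLIP-style matching."""
    fused = class_emb + alpha * global_prompt + (1.0 - alpha) * local_prompt
    return fused / np.linalg.norm(fused, axis=-1, keepdims=True)

# Toy example: 4 classes with 8-dimensional embeddings.
rng = np.random.default_rng(0)
class_emb = rng.normal(size=(4, 8))
global_prompt = rng.normal(size=(8,))   # shared across all clients
local_prompt = rng.normal(size=(8,))    # learned from one client's data
feats = dual_prompt_features(class_emb, global_prompt, local_prompt)
```

In this sketch, each client keeps its own `local_prompt` while the server maintains the shared `global_prompt`, which is what would let a newly joining client reuse the global component without retraining.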

📝 Abstract
Personalized federated learning (PFL) has garnered significant attention for its ability to address heterogeneous client data distributions while preserving data privacy. However, when local client data is limited, deep learning models often suffer from insufficient training, leading to suboptimal performance. Foundation models, such as CLIP (Contrastive Language-Image Pretraining), exhibit strong feature extraction capabilities and can alleviate this issue by fine-tuning on limited local data. Despite their potential, foundation models are rarely utilized in federated learning scenarios, and challenges related to integrating new clients remain largely unresolved. To address these challenges, we propose the Dual Prompt Personalized Federated Learning (DP2FL) framework, which introduces dual prompts and an adaptive aggregation strategy. DP2FL combines global task awareness with local data-driven insights, enabling local models to achieve effective generalization while remaining adaptable to specific data distributions. Moreover, DP2FL introduces a global model that enables prediction on new data sources and seamlessly integrates newly added clients without requiring retraining. Experimental results in highly heterogeneous environments validate the effectiveness of DP2FL's prompt design and aggregation strategy, underscoring the advantages of prediction on novel data sources and demonstrating the seamless integration of new clients into the federated learning framework.
Problem

Research questions and friction points this paper is trying to address.

Address heterogeneous data distributions in federated learning
Improve model performance with limited local client data
Integrate new clients seamlessly without retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual prompts enhance local model adaptability
Adaptive aggregation balances global and local insights
Global model enables seamless new client integration
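The adaptive aggregation bullet above can be illustrated with a simple server-side rule. This is a sketch under an assumed design: clients report a quality score (e.g., local validation accuracy), and the server softmax-weights their prompt updates. The paper's actual dynamic-weight rule is not specified here and may differ.

```python
import numpy as np

def adaptive_aggregate(client_prompts, client_scores, temperature=1.0):
    """Fuse per-client prompt updates with softmax weights derived from
    per-client scores (assumed here to be validation accuracies).
    Higher-scoring clients contribute more to the aggregated prompt."""
    scores = np.asarray(client_scores, dtype=float) / temperature
    weights = np.exp(scores - scores.max())  # stable softmax
    weights /= weights.sum()
    prompts = np.asarray(client_prompts)     # shape: (num_clients, dim)
    return np.tensordot(weights, prompts, axes=1)
```

With equal scores this reduces to plain FedAvg-style averaging; lowering `temperature` shifts weight toward the best-performing clients.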
Ying Chang
College of Software, Jilin University, Changchun, 130012, China
Xiaohu Shi
College of Software, Jilin University, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, Changchun, 130012, China
Xiaohui Zhao
Senior Lecturer, Federation University Australia
Zhaohuang Chen
College of Computer Science and Technology, Jilin University, Changchun, 130012, China
Deyin Ma
College of Computer Science and Engineering, Changchun University of Technology, Changchun, 130000, China