An Empirical Study of Federated Prompt Learning for Vision Language Model

📅 2025-05-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of prompt learning adaptation for vision-language models (VLMs) in federated learning (FL) under data heterogeneity—specifically label skew and domain shift. We empirically identify, for the first time, a performance divergence between language prompt tuning (LPT) and visual prompt tuning (VPT) in FL: LPT exhibits greater robustness to label skew, whereas VPT better handles domain shift. Building on this insight, we propose FPL, a dual-prompt collaborative optimization framework that integrates adaptive aggregation with heterogeneity-aware prompt updating. Extensive experiments across multiple cross-domain FL benchmarks demonstrate that FPL achieves an average accuracy improvement of 7.2% over baselines. Moreover, FPL shows strong robustness to variations in client count, aggregation strategy, and prompt length. This work establishes a novel paradigm and practical guidelines for efficient, robust, privacy-preserving deployment of VLMs in federated settings.

Technology Category

Application Category

📝 Abstract
The Vision Language Model (VLM) excels in aligning vision and language representations, and prompt learning has emerged as a key technique for adapting such models to downstream tasks. However, the application of prompt learning with VLM in federated learning (fl{}) scenarios remains underexplored. This paper systematically investigates the behavioral differences between language prompt learning (LPT) and vision prompt learning (VPT) under data heterogeneity challenges, including label skew and domain shift. We conduct extensive experiments to evaluate the impact of various fl{} and prompt configurations, such as client scale, aggregation strategies, and prompt length, to assess the robustness of Federated Prompt Learning (FPL). Furthermore, we explore strategies for enhancing prompt learning in complex scenarios where label skew and domain shift coexist, including leveraging both prompt types when computational resources allow. Our findings offer practical insights into optimizing prompt learning in federated settings, contributing to the broader deployment of VLMs in privacy-preserving environments.
Problem

Research questions and friction points this paper is trying to address.

Exploring prompt learning in federated VLM scenarios
Analyzing LPT and VPT under data heterogeneity challenges
Optimizing FPL robustness in privacy-preserving environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Prompt Learning for VLM adaptation
Evaluates LPT and VPT under data heterogeneity
Enhances prompts for label skew and domain shift
🔎 Similar Papers
Zhihao Wang
Zhihao Wang
Peking University
RoboticsReinforcement Learning
Wenke Huang
Wenke Huang
School of Computer Science, Wuhan University
Federated LearningMLLM
T
Tian Chen
School of Computer Science, Wuhan University
Zekun Shi
Zekun Shi
Unknown affiliation
Guancheng Wan
Guancheng Wan
Computer Science, UCLA
AI AgentAI4ScienceLarge Language ModelTrustworthy AI
Y
Yu Qiao
School of Computer Science, Wuhan University
B
Bin Yang
School of Computer Science, Wuhan University
J
Jian Wang
School of Computer Science, Wuhan University; Zhongguancun Laboratory, China
B
Bing Li
School of Computer Science, Wuhan University; Zhongguancun Laboratory, China
Mang Ye
Mang Ye
Professor, Wuhan University
Multimodal LearningPerson Re-identificationFederated Learning