🤖 AI Summary
This work addresses the challenge of prompt learning adaptation for vision-language models (VLMs) in federated learning (FL) under data heterogeneity—specifically label skew and domain shift. We empirically identify, for the first time, a performance divergence between language prompt tuning (LPT) and visual prompt tuning (VPT) in FL: LPT exhibits greater robustness to label skew, whereas VPT better handles domain shift. Building on this insight, we propose FPL, a dual-prompt collaborative optimization framework that integrates adaptive aggregation with heterogeneity-aware prompt updating. Extensive experiments across multiple cross-domain FL benchmarks demonstrate that FPL achieves an average accuracy improvement of 7.2% over baselines. Moreover, FPL shows strong robustness to variations in client count, aggregation strategy, and prompt length. This work establishes a novel paradigm and practical guidelines for efficient, robust, privacy-preserving deployment of VLMs in federated settings.
📝 Abstract
Vision-Language Models (VLMs) excel at aligning vision and language representations, and prompt learning has emerged as a key technique for adapting such models to downstream tasks. However, the application of prompt learning with VLMs in federated learning (FL) scenarios remains underexplored. This paper systematically investigates the behavioral differences between language prompt tuning (LPT) and visual prompt tuning (VPT) under data heterogeneity challenges, including label skew and domain shift. We conduct extensive experiments to evaluate the impact of various FL and prompt configurations, such as client scale, aggregation strategy, and prompt length, to assess the robustness of Federated Prompt Learning (FPL). Furthermore, we explore strategies for enhancing prompt learning in complex scenarios where label skew and domain shift coexist, including leveraging both prompt types when computational resources allow. Our findings offer practical insights into optimizing prompt learning in federated settings, contributing to the broader deployment of VLMs in privacy-preserving environments.
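To make the federated prompt-learning setup concrete, the sketch below shows a minimal FedAvg-style aggregation of client-tuned soft prompts: each client locally optimizes a small prompt tensor (language or visual) while the VLM backbone stays frozen, and the server averages the prompts weighted by local data size. The function name `fedavg_prompts`, the shapes, and the toy data are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
import numpy as np

def fedavg_prompts(client_prompts, client_sizes):
    """FedAvg-style aggregation of client soft prompts (illustrative sketch).

    client_prompts: list of arrays, each of shape (prompt_len, embed_dim),
                    holding one client's locally tuned prompt parameters.
    client_sizes:   number of local training samples per client, used as
                    aggregation weights as in standard FedAvg.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                      # normalize to sum to 1
    stacked = np.stack(client_prompts)            # (n_clients, prompt_len, embed_dim)
    # Weighted sum over the client axis yields the global prompt.
    return np.tensordot(weights, stacked, axes=1)

# Toy example: three clients with different local prompts and data sizes.
prompts = [np.full((4, 8), float(i)) for i in range(3)]
global_prompt = fedavg_prompts(prompts, client_sizes=[10, 20, 30])
```

Only the prompt parameters (a few KB) are communicated per round, which is what makes prompt learning attractive for federated deployment of large VLMs.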