🤖 AI Summary
To address the challenge of simultaneously achieving personalized adaptation and cross-domain generalization for vision-language models (e.g., CLIP) in federated learning under heterogeneous, decentralized data, this paper proposes pFedMMA, the first personalized federated learning framework to leverage multi-modal adapters for vision-language tasks. The method equips each modality with its own adapter (modality-specific up- and down-projection layers) alongside a globally shared projection layer that aligns cross-modal features, and employs an asymmetric optimization strategy: only the lightweight shared projection is transmitted between clients and server, enabling communication efficiency while supporting deep local personalization. This design improves cross-client feature alignment and semantic consistency. Extensive experiments across 11 benchmark datasets demonstrate that the approach achieves state-of-the-art trade-offs between personalization and generalization under both domain-shift and label-shift scenarios, consistently outperforming existing federated prompt tuning methods.
📝 Abstract
Vision-Language Models (VLMs) like CLIP have demonstrated remarkable generalization in zero- and few-shot settings, but adapting them efficiently to decentralized, heterogeneous data remains a challenge. While prompt tuning has emerged as a popular parameter-efficient approach in personalized federated learning, existing methods often sacrifice generalization in favor of personalization, struggling particularly on unseen classes or domains. In this work, we propose pFedMMA, the first personalized federated learning framework that leverages multi-modal adapters for vision-language tasks. Each adapter contains modality-specific up- and down-projection layers alongside a globally shared projection that aligns cross-modal features. Our asymmetric optimization strategy allows clients to locally adapt to personalized data distributions while collaboratively training the shared projection to improve global generalization. This design is also communication-efficient, as only the shared component is exchanged during communication rounds. Through extensive experiments across eleven datasets, including domain- and label-shift scenarios, we show that pFedMMA achieves state-of-the-art trade-offs between personalization and generalization, outperforming recent federated prompt tuning methods. The code is available at https://github.com/sajjad-ucsb/pFedMMA.
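The adapter design and asymmetric exchange described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, the ReLU bottleneck, the residual connection, and the FedAvg-style averaging of the shared projection are all assumptions made for clarity (see the repository linked above for the actual code).

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiModalAdapter:
    """Hypothetical per-client adapter: modality-specific projections
    plus one projection shared across modalities and clients."""

    def __init__(self, dim=512, bottleneck=64):
        # Modality-specific down/up projections: stay local (personalized).
        self.down = {m: rng.standard_normal((dim, bottleneck)) * 0.02
                     for m in ("vision", "text")}
        self.up = {m: rng.standard_normal((bottleneck, dim)) * 0.02
                   for m in ("vision", "text")}
        # Globally shared projection: the only part communicated.
        self.shared = rng.standard_normal((bottleneck, bottleneck)) * 0.02

    def forward(self, x, modality):
        # Residual adapter: down-project, pass through the shared
        # cross-modal projection, up-project, add back the input.
        h = np.maximum(x @ self.down[modality], 0.0)  # ReLU bottleneck
        h = h @ self.shared
        return x + h @ self.up[modality]

def aggregate_shared(clients):
    """Server step (illustrative FedAvg-style rule): average only the
    lightweight shared projections; modality-specific layers never
    leave their clients."""
    avg = np.mean([c.shared for c in clients], axis=0)
    for c in clients:
        c.shared = avg.copy()

# Three clients adapt locally, then synchronize the shared projection.
clients = [MultiModalAdapter() for _ in range(3)]
aggregate_shared(clients)
feat = rng.standard_normal((4, 512))       # e.g. CLIP image features
out = clients[0].forward(feat, "vision")   # adapted features, same shape
```

The asymmetry is visible in what each function touches: local training would update all of `down`, `up`, and `shared`, while `aggregate_shared` communicates and averages only `shared`, which keeps per-round traffic small and leaves the modality-specific layers free to fit each client's distribution.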