🤖 AI Summary
This paper targets three key challenges in federated adaptation of vision-language models (VLMs): inadequate exploitation of multimodal information, difficulty in modeling strong data heterogeneity across clients, and reliance on additional training resources. It proposes TOFA, the first training-free, one-shot federated adaptation framework for VLMs. TOFA completes adaptation in a single round of client-server interaction, employing a dual-pipeline architecture over the vision and language modalities: a hierarchical Bayesian model learns personalized, class-specific prototype distributions in the visual pipeline; locally generated text prompts are evaluated and globally aligned for robustness in the textual pipeline; and an adaptive weight calibration mechanism fuses the two modalities' predictions, balancing personalization against robustness under data heterogeneity. Extensive experiments across nine datasets and diverse federated settings demonstrate that TOFA incurs zero training overhead and minimal communication cost, yet consistently outperforms state-of-the-art methods, exhibiting strong robustness to data heterogeneity and generalization to unseen domains and tasks.
📝 Abstract
Efficient and lightweight adaptation of pre-trained Vision-Language Models (VLMs) to downstream tasks through collaborative interactions between local clients and a central server is a rapidly emerging research topic in federated learning. Existing adaptation algorithms are typically trained iteratively, incurring significant communication costs and increasing susceptibility to potential attacks. Motivated by one-shot federated training techniques that reduce client-server exchanges to a single round, developing a lightweight one-shot federated VLM adaptation method to alleviate these issues is particularly attractive. However, current one-shot approaches face certain challenges in adapting VLMs within federated settings: (1) insufficient exploitation of the rich multimodal information inherent in VLMs; (2) a lack of specialized adaptation strategies to systematically handle severe data heterogeneity; and (3) a reliance on additional training resources at the clients or the server. To bridge these gaps, we propose a novel Training-free One-shot Federated Adaptation framework for VLMs, named TOFA. To fully leverage the generalizable multimodal features in pre-trained VLMs, TOFA employs both visual and textual pipelines to extract task-relevant representations. In the visual pipeline, a hierarchical Bayesian model learns personalized, class-specific prototype distributions. In the textual pipeline, TOFA evaluates and globally aligns the locally generated text prompts for robustness. An adaptive weight calibration mechanism is also introduced to combine predictions from both modalities, balancing personalization and robustness to handle data heterogeneity. Our method is training-free, requiring no additional training resources on either the client or server side. Extensive experiments across 9 datasets in various federated settings demonstrate the effectiveness of the proposed TOFA method.
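The inference pipeline the abstract describes, class-specific prototype scores from the visual pipeline, CLIP-style similarity scores against globally aligned class text embeddings, and an adaptive fusion of the two predictions, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: the function names, the single shared inverse covariance, and the entropy-based confidence weighting used for calibration are all assumptions introduced for illustration.

```python
import numpy as np

def gaussian_proto_logits(feat, means, cov_inv):
    """Visual-pipeline scores: negative squared Mahalanobis distance of an
    image feature to each class's Gaussian prototype (illustrative stand-in
    for the paper's hierarchical Bayesian prototype distributions)."""
    diffs = means - feat                      # (C, d) per-class differences
    d2 = np.einsum('cd,de,ce->c', diffs, cov_inv, diffs)
    return -d2

def text_logits(feat, text_embs, temp=0.01):
    """Textual-pipeline scores: CLIP-style cosine similarity between the
    image feature and the aligned class text embeddings."""
    f = feat / np.linalg.norm(feat)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return (t @ f) / temp

def softmax(z):
    z = z - z.max()                           # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def fuse_predictions(p_vis, p_txt):
    """Adaptive weight calibration (assumed heuristic): weight each modality
    by its confidence, measured as one minus normalized entropy, so the more
    certain modality contributes more to the fused prediction."""
    def confidence(p):
        ent = -(p * np.log(p + 1e-12)).sum() / np.log(len(p))
        return 1.0 - ent
    w_v, w_t = confidence(p_vis), confidence(p_txt)
    w = w_v / (w_v + w_t + 1e-12)
    return w * p_vis + (1.0 - w) * p_txt

# Toy usage with 2 classes in a 2-d feature space
feat = np.array([1.0, 0.0])
means = np.array([[1.0, 0.0], [0.0, 1.0]])    # per-class prototype means
p_vis = softmax(gaussian_proto_logits(feat, means, np.eye(2)))
p_txt = softmax(text_logits(feat, means))     # reuse means as text embeddings
fused = fuse_predictions(p_vis, p_txt)        # a valid distribution over classes
```

The entropy-based weighting is only one plausible way to realize "adaptive weight calibration"; the paper's actual mechanism may condition on other statistics of the local data.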