🤖 AI Summary
Federated Domain Generalization (FedDG) faces two key challenges: weak generalization of a single global prompt and coarse-grained, image-level expert assignment in Mixture-of-Experts (MoE) methods, exacerbated by parameter-heavy routers that induce high communication overhead. To address these, we propose a token-level prompt mixture framework: (i) the first parameter-free token clustering scheme coupled with optimal transport–driven fine-grained expert routing, eliminating learnable routers; and (ii) instance-adaptive prompt synthesis and unbiased prompt expert learning to enhance personalized sample modeling. Built upon vision-language models and prompt learning, our method achieves state-of-the-art zero-shot generalization across four benchmarks. Crucially, it reduces per-round communication to only ~1K parameters—significantly lowering communication cost—while simultaneously improving personalized accuracy.
📝 Abstract
Federated domain generalization (FedDG) aims to learn a globally generalizable model from decentralized clients with heterogeneous data while preserving privacy. Recent studies have introduced prompt learning to adapt vision-language models (VLMs) in FedDG by learning a single global prompt. However, such a one-prompt-fits-all learning paradigm typically leads to performance degradation on personalized samples. Although the mixture of experts (MoE) offers a promising solution for specialization, existing MoE-based methods suffer from coarse image-level expert assignment and high communication costs from parameterized routers. To address these limitations, we propose TRIP, a Token-level prompt mixture with parameter-free routing framework for FedDG, which treats multiple prompts as distinct experts. Unlike existing image-level routing designs, TRIP assigns individual tokens within an image to specific experts. To ensure communication efficiency, TRIP incorporates a parameter-free routing mechanism based on token clustering and optimal transport. The instance-specific prompt is then synthesized by aggregating experts, weighted by the number of tokens assigned to each. Additionally, TRIP develops an unbiased learning strategy for prompt experts, leveraging the VLM's zero-shot generalization capability. Extensive experiments across four benchmarks demonstrate that TRIP achieves state-of-the-art generalization results while communicating only ~1K parameters per round. Our code is available at https://github.com/GongShuai8210/TRIP.
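The routing idea described above can be illustrated with a minimal sketch. This is not TRIP's actual implementation: it omits the token clustering step and routes tokens to prompt experts directly via entropic optimal transport (Sinkhorn iterations), then synthesizes an instance-specific prompt as the token-count-weighted average of experts. All function and variable names here are hypothetical, and the cosine cost and uniform marginals are assumptions for illustration.

```python
import numpy as np

def sinkhorn(cost, n_iters=50, eps=0.05):
    """Entropic OT between uniform marginals; returns a soft transport plan.

    cost: (N, E) token-to-expert cost matrix (hypothetical setup).
    """
    K = np.exp(-cost / eps)
    n, m = cost.shape
    r = np.full(n, 1.0 / n)   # uniform marginal over tokens (assumption)
    c = np.full(m, 1.0 / m)   # uniform marginal over experts (assumption)
    u = np.ones(n)
    for _ in range(n_iters):
        v = c / (K.T @ u)
        u = r / (K @ v)
    return u[:, None] * K * v[None, :]

def route_and_synthesize(tokens, experts):
    """Parameter-free routing sketch: no learnable router parameters.

    tokens:  (N, d) image token features.
    experts: (E, d) prompt experts.
    Returns an instance-specific prompt of shape (d,).
    """
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    e = experts / np.linalg.norm(experts, axis=1, keepdims=True)
    cost = 1.0 - t @ e.T              # cosine cost (assumption)
    plan = sinkhorn(cost)
    assign = plan.argmax(axis=1)      # hard token-to-expert assignment
    counts = np.bincount(assign, minlength=experts.shape[0])
    weights = counts / counts.sum()   # weight experts by assigned-token count
    return weights @ experts
```

The balanced OT marginals discourage routing collapse (all tokens picking one expert) without any trainable router, which is why only the ~1K prompt parameters need to be communicated.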