🤖 AI Summary
Federated prompt learning (FPL) for vision-language models suffers from insufficient out-of-distribution (OOD) robustness and a fundamental trade-off between in-distribution (ID) accuracy and OOD generalization.
Method: We propose the first federated OOD-aware context optimization framework, built on a tripartite prompt structure (ID-global, local, and OOD-specific prompts) and combined with bilevel distributionally robust optimization and semi-unbalanced optimal transport to calibrate discriminative consistency across distributions.
Contribution/Results: Our method explicitly decouples and jointly models heterogeneous ID distributions and OOD shifts across clients. Extensive experiments on multiple real-world benchmarks demonstrate substantial gains in generalization under diverse OOD shifts while preserving high ID accuracy, achieving for the first time in FPL a systematic balance between performance and robustness.
📝 Abstract
Federated prompt learning (FPL) for vision-language models is a powerful approach to collaboratively adapt models across distributed clients while preserving data privacy. However, existing FPL approaches suffer from a trade-off between performance and robustness, particularly under out-of-distribution (OOD) shifts, limiting their reliability in real-world scenarios. The inherent in-distribution (ID) data heterogeneity among clients makes this trade-off even harder to balance. To fill this gap, we introduce a Federated OOD-aware Context Optimization (FOCoOp) framework, which captures diverse distributions among clients using ID global prompts, local prompts, and OOD prompts. Specifically, FOCoOp leverages these three sets of prompts to create both class-level and distribution-level separations, which adapt to OOD shifts through bi-level distributionally robust optimization. Additionally, FOCoOp improves discrimination consistency among clients by calibrating global prompts, seemingly-OOD prompts, and OOD prompts via semi-unbalanced optimal transport. Extensive experiments on real-world datasets demonstrate that FOCoOp effectively captures decentralized heterogeneous distributions and enhances robustness to different OOD shifts. The project is available on GitHub.
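The calibration step above relies on semi-unbalanced optimal transport: an OT problem in which one marginal constraint is enforced exactly while the other is only softly penalized (via a KL term), so prompt mass is preserved but prototypes need not absorb a fixed amount. As an illustration only, here is a minimal NumPy sketch of a semi-unbalanced Sinkhorn solver; the toy prompt/prototype setup, variable names, and the hyperparameters `eps` and `rho` are assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def semi_unbalanced_sinkhorn(C, a, b, eps=0.05, rho=1.0, n_iters=500):
    """Entropic OT with an exact marginal constraint on the rows (a)
    and a KL-relaxed constraint on the columns (b), solved by
    alternating Sinkhorn scaling updates."""
    K = np.exp(-C / eps)          # Gibbs kernel from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        # Soft (KL-penalized) update for the relaxed column marginal.
        v = (b / (K.T @ u)) ** (rho / (rho + eps))
        # Exact update for the row marginal (prompt mass is preserved).
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

# Toy demo: align 4 "prompt" embeddings to 6 "prototype" embeddings.
rng = np.random.default_rng(0)
prompts = rng.normal(size=(4, 8))
protos = rng.normal(size=(6, 8))
# Cost = 1 - cosine similarity between L2-normalized embeddings.
pn = prompts / np.linalg.norm(prompts, axis=1, keepdims=True)
qn = protos / np.linalg.norm(protos, axis=1, keepdims=True)
C = 1.0 - pn @ qn.T
a = np.full(4, 1 / 4)   # exact marginal over prompts
b = np.full(6, 1 / 6)   # relaxed marginal over prototypes
P = semi_unbalanced_sinkhorn(C, a, b)
```

Raising `rho` tightens the soft column constraint toward classical balanced OT, while a small `rho` lets mass concentrate on the prototypes that genuinely match, which is the kind of asymmetric matching a semi-unbalanced formulation provides.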