🤖 AI Summary
This work addresses a limitation of existing soft-prompt methods such as CoOp: they lack an explicit mechanism for handling domain shift under unseen distributions. To this end, the paper proposes DiCoOp, the first approach to incorporate domain invariance into prompt learning by optimizing context prompts through adversarial training. Without requiring any target-domain data, DiCoOp enables CLIP models to simultaneously preserve class-discriminative capability and achieve strong cross-domain generalization. The method establishes a domain-invariant context optimization framework that significantly outperforms CoOp across multiple visual domain generalization benchmarks, demonstrating superior zero-shot robustness to domain shifts.
📝 Abstract
Large pre-trained vision-language models like CLIP have transformed computer vision by aligning images and text in a shared feature space, enabling robust zero-shot transfer via prompting. Soft-prompting methods such as Context Optimization (CoOp) effectively adapt these models to downstream recognition tasks by learning a set of context vectors. However, CoOp lacks an explicit mechanism for handling domain shifts across unseen distributions. To address this, we propose Domain-invariant Context Optimization (DiCoOp), an extension of CoOp designed for domain generalization. Through adversarial training, DiCoOp forces the model to learn domain-invariant prompts while preserving discriminative power for classification. Experimental results show that DiCoOp consistently surpasses CoOp on domain generalization tasks across diverse visual domains.
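The adversarial recipe described above can be sketched with a gradient reversal layer: a domain discriminator tries to predict the source domain from prompt-conditioned features, while reversed gradients push the learnable context vectors toward domain invariance. This is a minimal illustrative sketch only; the module names, dimensions, and the simple additive fusion of context and image features are assumptions for exposition, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradient drives the feature extractor to *confuse*
        # the domain discriminator, encouraging domain invariance.
        return -ctx.lambd * grad_output, None


class DomainInvariantPrompt(nn.Module):
    """Hypothetical sketch: learnable soft prompt + adversarial domain head."""

    def __init__(self, n_ctx=16, dim=512, n_domains=4):
        super().__init__()
        # Learnable context vectors, as in CoOp-style soft prompting.
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # Discriminator that predicts which source domain a feature came from.
        self.domain_head = nn.Linear(dim, n_domains)

    def forward(self, image_feat, lambd=1.0):
        # Placeholder fusion of prompt and image feature (illustrative only).
        fused = image_feat + self.ctx.mean(dim=0)
        # Gradient reversal between the fused feature and the domain head:
        # the head learns to classify domains, while reversed gradients
        # update self.ctx to erase domain-specific cues.
        domain_logits = self.domain_head(GradReverse.apply(fused, lambd))
        return fused, domain_logits
```

In training, `fused` would feed the usual CLIP image-text classification loss while `domain_logits` feed a cross-entropy loss over source-domain labels; minimizing both jointly realizes the "discriminative yet domain-invariant" objective, with no target-domain data needed.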