🤖 AI Summary
Large vision-language models (e.g., CLIP) suffer from multimodal spurious correlations, such as the frequent co-occurrence of backgrounds and classes, which degrade out-of-distribution robustness in zero-shot classification. To address this, we propose a training-free, annotation-free, and prior-free prompt selection method that performs a guided search over the prompt template space using a semantic separation metric, explicitly mitigating spurious feature bias. We provide the first theoretical analysis of the origin and impact of multimodal spurious bias and introduce a bias-aware zero-shot inference framework. Evaluated on four real-world benchmarks and five state-of-the-art VLMs, our approach significantly improves worst-group accuracy and generalization, outperforming existing unsupervised prompt tuning methods.
📝 Abstract
Large vision-language models, such as CLIP, have shown strong zero-shot classification performance by aligning images and text in a shared embedding space. However, CLIP models often develop multimodal spurious bias, an undesirable tendency to rely on spurious features. For example, CLIP may infer the object type in an image from a frequently co-occurring background rather than from the object's core features. This bias significantly impairs the robustness of pre-trained CLIP models on out-of-distribution data, where such cross-modal associations no longer hold. Existing methods for mitigating multimodal spurious bias typically require fine-tuning on downstream data or prior knowledge of the bias, which undermines the out-of-the-box usability of CLIP. In this paper, we first theoretically analyze the impact of multimodal spurious bias in zero-shot classification. Based on this insight, we propose Spuriousness-Aware Guided Exploration (SAGE), a simple and effective method that mitigates spurious bias through guided prompt selection. SAGE requires no training, fine-tuning, or external annotations. It explores a space of prompt templates and selects the prompts that induce the largest semantic separation between classes, thereby improving worst-group robustness. Extensive experiments on four real-world benchmark datasets and five popular backbone models demonstrate that SAGE consistently improves zero-shot performance and generalization, outperforming previous zero-shot approaches without any external knowledge or model updates.
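To make the selection idea concrete, here is a minimal sketch of choosing a prompt template by the semantic separation its class-text embeddings induce. This is an illustration under our own assumptions, not the paper's exact metric: we score a template by the mean pairwise cosine *distance* between the normalized class embeddings it produces, and `embed_fn` stands in for a real text encoder such as CLIP's.

```python
import numpy as np

def separation_score(class_embs: np.ndarray) -> float:
    """Mean pairwise cosine distance between L2-normalized class embeddings.

    class_embs: array of shape (n_classes, dim). Higher score means the
    template pushes class texts further apart in embedding space.
    """
    sims = class_embs @ class_embs.T            # pairwise cosine similarities
    off_diag = sims[~np.eye(len(class_embs), dtype=bool)]
    return float(1.0 - off_diag.mean())

def select_template(templates, embed_fn, class_names):
    """Pick the template whose filled-in class prompts are most separated.

    templates:   e.g. ["a photo of a {}", ...] (hypothetical examples)
    embed_fn:    maps a prompt string to an embedding vector (e.g., a CLIP
                 text encoder in a real setting; any callable here)
    class_names: list of class label strings
    """
    best, best_score = None, -np.inf
    for template in templates:
        embs = np.stack([embed_fn(template.format(c)) for c in class_names])
        embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
        score = separation_score(embs)
        if score > best_score:
            best, best_score = template, score
    return best, best_score
```

With a toy encoder, a template whose class prompts map to near-identical vectors scores low, while one whose prompts map to near-orthogonal vectors scores high and is selected; the same loop applies unchanged with a real text encoder.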