SAGE: Spuriousness-Aware Guided Prompt Exploration for Mitigating Multimodal Bias

📅 2025-11-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large vision-language models (e.g., CLIP) suffer from multimodal spurious correlations—such as frequent co-occurrence of backgrounds and classes—leading to degraded out-of-distribution robustness in zero-shot classification. To address this, we propose a training-free, annotation-free, and prior-free prompt selection method that performs guided search over the prompt template space using a semantic separation metric, explicitly mitigating spurious feature bias. We provide the first theoretical analysis of the origin and impact of multimodal spurious bias and introduce a bias-aware zero-shot inference framework. Evaluated on four real-world benchmarks and five state-of-the-art VLMs, our approach significantly improves worst-group accuracy and generalization, outperforming all existing unsupervised prompt tuning methods.

📝 Abstract
Large vision-language models, such as CLIP, have shown strong zero-shot classification performance by aligning images and text in a shared embedding space. However, CLIP models often develop multimodal spurious bias: an undesirable tendency to rely on spurious features. For example, CLIP may infer object types in images from frequently co-occurring backgrounds rather than the object's core features. This bias significantly impairs the robustness of pre-trained CLIP models on out-of-distribution data, where such cross-modal associations no longer hold. Existing methods for mitigating multimodal spurious bias typically require fine-tuning on downstream data or prior knowledge of the bias, which undermines the out-of-the-box usability of CLIP. In this paper, we first theoretically analyze the impact of multimodal spurious bias in zero-shot classification. Based on this insight, we propose Spuriousness-Aware Guided Exploration (SAGE), a simple and effective method that mitigates spurious bias through guided prompt selection. SAGE requires no training, fine-tuning, or external annotations. It explores a space of prompt templates and selects the prompts that induce the largest semantic separation between classes, thereby improving worst-group robustness. Extensive experiments on four real-world benchmark datasets and five popular backbone models demonstrate that SAGE consistently improves zero-shot performance and generalization, outperforming previous zero-shot approaches without any external knowledge or model updates.
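The zero-shot scheme the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's code: the numpy vectors stand in for embeddings a real text/image encoder (e.g. CLIP) would produce, and `zero_shot_classify` is a hypothetical helper name.

```python
import numpy as np

def zero_shot_classify(image_emb: np.ndarray, class_text_embs: np.ndarray) -> int:
    """CLIP-style zero-shot prediction: return the index of the class whose
    prompt embedding is most cosine-similar to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Toy 2-D embeddings standing in for encoder outputs: class 0's prompt
# embedding points roughly where the image embedding does.
image_emb = np.array([0.9, 0.1])
class_text_embs = np.array([[1.0, 0.0],   # prompt for class 0
                            [0.0, 1.0]])  # prompt for class 1
pred = zero_shot_classify(image_emb, class_text_embs)  # -> 0
```

Spurious bias enters exactly here: if background features dominate `image_emb`, the nearest prompt embedding may belong to the class that merely co-occurs with that background.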
Problem

Research questions and friction points this paper is trying to address.

CLIP models develop multimodal spurious biases from background correlations
Existing bias mitigation methods require fine-tuning or prior bias knowledge
Spurious bias impairs CLIP robustness on out-of-distribution data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided prompt selection without model fine-tuning
Semantic separation maximization for bias mitigation
Training-free spurious bias reduction in vision-language models
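The guided-selection idea above can be sketched as follows. This is a minimal illustration under assumptions: the exact separation metric here (mean pairwise cosine distance between class prompt embeddings) is an illustrative choice, `embed_text` is a caller-supplied stand-in for a real text encoder, and the stub encoder below is purely for demonstration.

```python
import numpy as np

def semantic_separation(class_embeddings: np.ndarray) -> float:
    """Mean pairwise cosine distance between L2-normalized class embeddings.
    Higher means the template spreads the classes further apart."""
    e = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    n = len(e)
    return float(np.mean(1.0 - sim[~np.eye(n, dtype=bool)]))

def select_prompt(templates, classes, embed_text):
    """Score every template by the separation it induces; keep the best."""
    scores = {t: semantic_separation(
                  np.stack([embed_text(t.format(c)) for c in classes]))
              for t in templates}
    return max(scores, key=scores.get), scores

# Demo with a toy deterministic "encoder" (hypothetical, 2-D vectors):
# one template yields orthogonal class embeddings, the other near-parallel ones.
def stub_embed(text):
    if text.startswith("a photo"):
        return np.array([1.0, 0.0]) if "cat" in text else np.array([0.0, 1.0])
    return np.array([1.0, 0.1]) if "cat" in text else np.array([1.0, 0.0])

best, scores = select_prompt(["a photo of a {}", "a {}"], ["cat", "dog"], stub_embed)
# best -> "a photo of a {}"
```

No gradients, labels, or model updates are involved: the search only re-embeds text prompts, which is what makes this style of mitigation training-free.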
Wenqian Ye
University of Virginia
Machine Learning, Alignment, Agentic AI, Embodied Intelligence

Di Wang
Department of Computer Science, University of Virginia, USA

Guangtao Zheng
Accenture; University of Virginia
multimodal machine learning, computer vision, natural language processing, bioinformatics

Bohan Liu
Department of Computer Science, University of Virginia, USA

Aidong Zhang
Department of Computer Science, University of Virginia, USA