🤖 AI Summary
To address the challenge of balancing downstream task performance and cross-category generalization in prompt tuning for vision-language models (e.g., CLIP), this paper proposes a dual-view mutual information-driven prompt optimization framework. Methodologically, it models soft prompts and handcrafted prompts as two complementary text-modal views and jointly optimizes them via mutual information maximization. It further introduces a class-aware visual feature enhancement mechanism to improve robustness to unseen categories. Crucially, the approach preserves the flexibility of learnable soft prompts while explicitly enforcing semantic consistency between prompt views and discriminability with respect to visual features. Experiments on multiple standard benchmarks demonstrate consistent improvements in both downstream classification accuracy and zero-shot/few-shot generalization performance, significantly outperforming state-of-the-art prompt tuning methods.
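The core idea of treating soft and handcrafted prompts as two views and maximizing their mutual information can be sketched with an InfoNCE-style lower bound, a standard contrastive estimator of MI. This is a minimal illustration, not the paper's implementation: the function name, temperature value, and the assumption that row `i` of each view corresponds to the same class prompt are all choices made here for clarity.

```python
import numpy as np

def info_nce_lower_bound(z_soft, z_hand, temperature=0.07):
    """InfoNCE-style loss between two prompt views (hypothetical sketch).

    z_soft, z_hand: (N, D) text features from the soft and handcrafted
    prompts, where row i of each matrix describes the same class (the
    positive pair); all other rows serve as in-batch negatives.
    Minimizing the returned loss maximizes a lower bound on the mutual
    information between the two views.
    """
    # L2-normalize so dot products are cosine similarities
    z_soft = z_soft / np.linalg.norm(z_soft, axis=1, keepdims=True)
    z_hand = z_hand / np.linalg.norm(z_hand, axis=1, keepdims=True)
    logits = z_soft @ z_hand.T / temperature       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # row-wise log-softmax; the diagonal entries are the positive pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the two views agree (each soft-prompt feature is closest to its handcrafted counterpart), the loss approaches zero; for independent views it approaches `log N`, so driving it down pulls task-specific and general semantics together.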
📄 Abstract
Prompt tuning for vision-language models such as CLIP involves optimizing the text prompts used to generate image-text pairs for specific downstream tasks. While hand-crafted or template-based prompts are generally applicable to a wider range of unseen classes, they tend to perform poorly on downstream tasks (i.e., seen classes). Learnable soft prompts, on the other hand, often perform well on downstream tasks but lack generalizability. Additionally, prior research has predominantly concentrated on the textual modality, with very few studies attempting to explore the prompt's generalization potential from the visual modality. Keeping these limitations in mind, we investigate how to perform prompt tuning so as to obtain both competitive downstream performance and generalization. The study shows that by treating soft and hand-crafted prompts as dual views of the textual modality and maximizing their mutual information, we can better combine task-specific and general semantic information. Moreover, to generate more expressive prompts, the study introduces a class-wise augmentation from the visual modality, yielding significantly greater robustness to a wider range of unseen classes. Extensive evaluations on several benchmarks report that the proposed approach achieves competitive results in terms of both task-specific performance and general abilities.
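The class-wise augmentation from the visual modality can be illustrated as blending per-class visual statistics into a shared soft prompt. The sketch below is an assumption-laden simplification: the function name, the use of class-mean features as visual prototypes, and the blending coefficient `alpha` are illustrative choices, not details from the paper.

```python
import numpy as np

def class_wise_augment(soft_prompt, image_feats, labels, num_classes, alpha=0.1):
    """Hypothetical sketch of class-wise visual augmentation.

    soft_prompt: (L, D) learnable context tokens shared across classes
    image_feats: (N, D) image features from the vision encoder
    labels:      (N,) integer class labels for the image features
    Returns:     (num_classes, L, D) class-conditioned prompts, where
    each class's prompt is the shared context shifted toward that
    class's mean visual feature (its visual prototype).
    """
    D = soft_prompt.shape[1]
    prompts = []
    for c in range(num_classes):
        feats_c = image_feats[labels == c]
        # fall back to a zero prototype for classes with no samples
        proto = feats_c.mean(axis=0) if len(feats_c) else np.zeros(D)
        prompts.append(soft_prompt + alpha * proto)  # broadcast over tokens
    return np.stack(prompts)
```

Because each class receives its own visually grounded offset, the resulting prompts carry class-discriminative information even for categories whose names alone are weakly descriptive, which is one plausible route to the robustness on unseen classes described above.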