Generalizable Prompt Tuning for Vision-Language Models

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenge of balancing downstream task performance and cross-category generalization in prompt tuning for vision-language models (e.g., CLIP), this paper proposes a dual-view mutual information-driven prompt optimization framework. Methodologically, it models soft prompts and handcrafted prompts as two complementary text-modal views and jointly optimizes them via mutual information maximization. It further introduces a class-aware visual feature enhancement mechanism to improve robustness to unseen categories. Crucially, the approach preserves the flexibility of learnable soft prompts while explicitly enforcing semantic consistency between prompt views and discriminability with respect to visual features. Experiments on multiple standard benchmarks demonstrate consistent improvements in both downstream classification accuracy and zero-shot/few-shot generalization performance, significantly outperforming state-of-the-art prompt tuning methods.
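The summary's core idea — jointly optimizing soft and hand-crafted prompts as two views by maximizing their mutual information — can be sketched with a symmetric InfoNCE-style objective, a standard lower bound on mutual information between paired views. This is an illustrative assumption, not the paper's exact objective; the function names and the temperature value are hypothetical.

```python
import numpy as np

def _log_softmax(x):
    # Numerically stable row-wise log-softmax.
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def infonce_mi_lower_bound(soft_feats, hand_feats, temperature=0.07):
    """Symmetric InfoNCE objective between two prompt views.

    soft_feats, hand_feats: (N, D) text features for the same N classes,
    encoded from learnable soft prompts and hand-crafted prompts.
    Maximizing the returned value tightens a lower bound (up to +log N)
    on the mutual information between the two views.
    (Illustrative sketch; not the paper's exact formulation.)
    """
    # L2-normalize each view, as is standard for CLIP-style features.
    soft = soft_feats / np.linalg.norm(soft_feats, axis=1, keepdims=True)
    hand = hand_feats / np.linalg.norm(hand_feats, axis=1, keepdims=True)
    logits = soft @ hand.T / temperature  # (N, N); matched pairs on diagonal
    # Symmetric contrastive loss over both matching directions.
    loss = -0.5 * (np.mean(np.diag(_log_softmax(logits)))
                   + np.mean(np.diag(_log_softmax(logits.T))))
    return -loss
```

In practice the two views would come from the same frozen text encoder, with gradients flowing only into the soft prompt tokens.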

πŸ“ Abstract
Prompt tuning for vision-language models such as CLIP involves optimizing the text prompts used to score image-text pairs for specific downstream tasks. While hand-crafted or template-based prompts are generally applicable to a wider range of unseen classes, they tend to perform poorly on downstream tasks (i.e., seen classes). Learnable soft prompts, on the other hand, often perform well on downstream tasks but lack generalizability. Additionally, prior research has predominantly concentrated on the textual modality, with very few studies attempting to explore the prompt's generalization potential from the visual modality. Keeping these limitations in mind, we investigate how to conduct prompt tuning so as to obtain both competitive downstream performance and generalization. The study shows that by treating soft and hand-crafted prompts as dual views of the textual modality and maximizing their mutual information, we can better integrate task-specific and general semantic information. Moreover, to generate more expressive prompts, the study introduces a class-wise augmentation from the visual modality, resulting in significant robustness to a wider range of unseen classes. Extensive evaluations on several benchmarks show that the proposed approach achieves competitive results in terms of both task-specific performance and general abilities.
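The "class-wise augmentation from the visual modality" mentioned above could, under one plausible reading, amount to synthesizing extra visual features per class in feature space. The sketch below fits a diagonal Gaussian to each class's image features and samples new ones; the function name and the Gaussian mechanism are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def classwise_feature_augmentation(feats, labels, n_aug=5, rng=None):
    """Hypothetical class-wise augmentation in visual feature space.

    feats: (M, D) image features; labels: (M,) integer class labels.
    For each class, fit a diagonal Gaussian to its features and sample
    n_aug synthetic features, enriching the visual signal available to
    the prompts. (Illustrative only; not the paper's exact mechanism.)
    """
    rng = np.random.default_rng(rng)
    aug_feats, aug_labels = [], []
    for c in np.unique(labels):
        cls = feats[labels == c]
        # Per-dimension mean and std of this class's features.
        mu, sigma = cls.mean(axis=0), cls.std(axis=0) + 1e-6
        aug_feats.append(rng.normal(mu, sigma, size=(n_aug, feats.shape[1])))
        aug_labels.extend([c] * n_aug)
    return np.vstack(aug_feats), np.array(aug_labels)
```

Sampling in feature space keeps the frozen image encoder untouched, which is consistent with the few-shot setting the abstract targets.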
Problem

Research questions and friction points this paper is trying to address.

CLIP Model
Prompt Engineering
Adaptability Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Engineering
Visual-Prompt Fusion
Generalization Enhancement
Qian Zhang
Northwestern Polytechnical University, Xi'an, China