AI Summary
Large vision-language models (e.g., CLIP) exhibit high sensitivity to natural language prompt templates, hindering their generalization and practical deployment in downstream tasks. To address this, we propose MVP, a novel framework that decouples prompt templates from class names via structural modeling, and introduces a variational autoencoder (VAE) to learn interpretable, robust prompt distributions. This design preserves human readability while overcoming the opacity of existing learnable prompt methods. Building upon MVP, we construct RobustPrompt, a benchmark encompassing six categories and hundreds of diverse prompt templates, along with a multi-template robustness evaluation framework. Extensive experiments across 11 datasets demonstrate that MVP significantly improves model robustness against prompt variations without compromising original task performance. Both the code and the RobustPrompt benchmark are publicly released to advance standardization in vision-language model prompt engineering.
Abstract
Large pre-trained vision-language models (VLMs) offer a promising approach to leveraging human language for enhancing downstream tasks. However, VLMs such as CLIP face a significant limitation: their performance is highly sensitive to prompt template design. Although prompt learning methods can address this sensitivity by replacing natural language prompts with learnable ones, the learned prompts are incomprehensible to humans. Ensuring consistent performance across various prompt templates enables models to adapt seamlessly to diverse phrasings, enhancing their ability to handle downstream tasks without requiring extensive prompt engineering. In this work, we introduce the RobustPrompt Benchmark, a systematic benchmark for evaluating the robustness of VLMs to different prompt templates. It includes a dataset with hundreds of carefully designed prompt templates, divided into six types, covering a wide variety of commonly used templates. Besides the benchmark, we propose Modeling Variants of Prompts (MVP), a simple yet effective method that mitigates sensitivity by modeling variants of prompt structures. The innovation of MVP lies in decoupling prompts into templates and class names, and using Variational Autoencoders (VAEs) to model the distribution of diverse prompt structures. Experiments across 11 datasets demonstrate that MVP can greatly enhance model robustness to variations in input prompts without a drop in performance. The code is available at https://github.com/liaolea/MVP.
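To make the two core ideas concrete, the sketch below illustrates (1) decoupling a CLIP-style prompt into a template and a class name and (2) a minimal VAE forward pass over a template embedding, with the reparameterization trick and the KL term to a standard normal prior. This is a simplified illustration, not the released MVP implementation: the function and class names (`decouple`, `TinyVAE`), the embedding dimensions, and the single-layer encoder/decoder are all assumptions made for brevity.

```python
import numpy as np

# Hypothetical helper (not from the MVP codebase): split a prompt into
# a reusable template and a class-name slot, e.g.
# "a photo of a dog" -> "a photo of a {}".
def decouple(prompt: str, class_name: str) -> str:
    return prompt.replace(class_name, "{}")

rng = np.random.default_rng(0)

class TinyVAE:
    """Minimal single-sample VAE forward pass over a template embedding.

    A sketch only: real models would use learned multi-layer encoders and
    decoders and train on many templates."""

    def __init__(self, d_in: int = 8, d_z: int = 2):
        self.We = rng.normal(size=(d_in, 2 * d_z)) * 0.1  # encoder -> [mu, logvar]
        self.Wd = rng.normal(size=(d_z, d_in)) * 0.1      # decoder
        self.d_z = d_z

    def forward(self, x: np.ndarray):
        h = x @ self.We
        mu, logvar = h[: self.d_z], h[self.d_z :]
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        eps = rng.normal(size=self.d_z)
        z = mu + np.exp(0.5 * logvar) * eps
        x_hat = z @ self.Wd                               # reconstructed embedding
        # KL divergence between q(z|x) and the standard normal prior
        kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
        return x_hat, kl

template = decouple("a photo of a dog", "dog")
x = rng.normal(size=8)  # stand-in for the template's text embedding
x_hat, kl = TinyVAE().forward(x)
```

Because the same template is shared across all class names, sampling z from the learned distribution yields structural variants of the prompt independently of the class, which is the intuition behind modeling prompt structures rather than whole prompts.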