🤖 AI Summary
Visual Prompt Tuning (VPT) suffers from poor interpretability, hindering its trustworthy deployment and its use for semantic knowledge discovery. To address this, we propose IVPT, the first interpretable VPT framework, centered on category-agnostic, hierarchical concept prototypes (e.g., texture → part → structure) that explicitly align visual prompts with multi-granularity semantic spaces. IVPT enables end-to-end, region-aware, and semantically disentangled prompt generation. Its methodology comprises hierarchical prototype learning, region-wise feature aggregation, semantics-driven prompt generation, and quantitative interpretability evaluation. On fine-grained classification benchmarks, IVPT outperforms standard VPT and existing interpretable methods while delivering verifiable, human-understandable semantic explanations across granularities. Crucially, interpretability comes at no accuracy cost: task performance and explanation fidelity are optimized jointly.
📝 Abstract
Visual prompt tuning offers significant advantages for adapting pre-trained visual foundation models to specific tasks. However, current research provides limited insight into the interpretability of this approach, which is essential for enhancing AI reliability and enabling AI-driven knowledge discovery. In this paper, rather than learning abstract prompt embeddings, we propose Interpretable Visual Prompt Tuning (IVPT), the first framework to explore the interpretability of visual prompts, by introducing hierarchical concept prototypes. Specifically, visual prompts are linked to human-understandable semantic concepts, represented as a set of category-agnostic prototypes, each corresponding to a specific region of the image. IVPT then aggregates features from these regions to generate interpretable prompts, which are structured hierarchically to explain visual prompts at different granularities. Comprehensive qualitative and quantitative evaluations on fine-grained classification benchmarks demonstrate its superior interpretability and performance compared with conventional visual prompt tuning methods and existing interpretable methods.
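To make the pipeline described above concrete, here is a minimal PyTorch sketch of prototype-driven prompt generation. It is an illustration, not the paper's implementation: the class name, the use of soft attention for the prototype-to-region assignment, and all dimensions are assumptions. It shows the core idea that category-agnostic prototypes each attend to an image region, and the aggregated region features become the visual prompts.

```python
import torch
import torch.nn as nn

class ConceptPromptGenerator(nn.Module):
    """Hypothetical sketch of IVPT-style prompt generation (names and
    design details assumed): learnable, category-agnostic concept
    prototypes attend over patch features, and the region-aggregated
    features are projected into interpretable visual prompts."""

    def __init__(self, dim=768, num_prototypes=8):
        super().__init__()
        # category-agnostic concept prototypes, one per semantic region (assumed)
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        # projection from aggregated region features to prompt embeddings (assumed)
        self.proj = nn.Linear(dim, dim)

    def forward(self, patch_feats):
        # patch_feats: (B, N, D) patch tokens from a frozen backbone
        # prototype-to-patch similarity -> soft region assignment
        attn = torch.einsum('kd,bnd->bkn', self.prototypes, patch_feats)
        attn = attn.softmax(dim=-1)  # each prototype focuses on an image region
        # aggregate patch features within each concept's region
        region_feats = torch.einsum('bkn,bnd->bkd', attn, patch_feats)
        # one interpretable prompt per concept prototype: (B, K, D)
        return self.proj(region_feats)
```

A hierarchy could be built by stacking such modules, with coarser-level prototypes (e.g., structure) aggregating the prompts produced at finer levels (e.g., texture, part), so that explanations are available at each granularity.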