🤖 AI Summary
Existing visual reprogramming (VR) methods for CLIP train a single visual prompt jointly against the text descriptions of all downstream classes. This limited capacity neglects discriminative modeling of multi-semantic attributes, such as shape, color, and texture, and can bias the prompt toward weakly discriminative features. To address this, the paper proposes a Decoupled Visual Prompting (DVP) framework combined with a probabilistic reweighting matrix (PRM). DVP groups class descriptions by explicit causes (DVP-cse) or unsupervised clusters (DVP-cls) and optimizes one visual prompt per group, enabling attribute-aware, disentangled prompt learning. PRM then performs a prompt-level probabilistic ensemble and attributes each prompt's contribution to every downstream class, making the combination interpretable. Theoretically, DVP lowers the empirical risk bound. Across 11 downstream datasets, the method outperforms strong baselines on average, with particularly large gains on fine-grained classification.
📝 Abstract
Model reprogramming adapts pretrained models to downstream tasks by modifying only the input and output spaces. Visual reprogramming (VR) is one instance for vision tasks that adds a trainable noise pattern (i.e., a visual prompt) to input images to facilitate downstream classification. The existing VR approaches for CLIP train a single visual prompt using all descriptions of different downstream classes. However, the limited learning capacity may result in (1) a failure to capture diverse aspects of the descriptions (e.g., shape, color, and texture), and (2) a possible bias toward less informative attributes that do not help distinguish between classes. In this paper, we introduce a decoupling-and-reweighting framework. Our decoupled visual prompts (DVP) are optimized using descriptions grouped by explicit causes (DVP-cse) or unsupervised clusters (DVP-cls). Then, we integrate the outputs of these visual prompts with a probabilistic reweighting matrix (PRM) that measures their contributions to each downstream class. Theoretically, DVP lowers the empirical risk bound. Experimentally, DVP outperforms baselines on average across 11 downstream datasets. Notably, the DVP-PRM integration enables insights into how individual visual prompts influence classification decisions, providing a probabilistic framework for understanding reprogramming. Our code is available at https://github.com/tmlr-group/DecoupledVP.
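As a rough illustration of the decoupling-and-reweighting idea described above (a minimal NumPy sketch, not the authors' implementation), the snippet below adds one trainable noise pattern per attribute group to an input image, scores each prompted image against the downstream classes, and combines the per-prompt probabilities with a hypothetical per-(prompt, class) reweighting matrix. The prompt patterns, CLIP scores, and PRM weights are all random stand-ins; in the actual method they are learned.

```python
import numpy as np

rng = np.random.default_rng(0)
num_prompts, num_classes = 3, 5          # e.g. one prompt per attribute group
image = rng.random((224, 224, 3))        # stand-in for a downstream image

# Each decoupled prompt is a noise pattern added to the input image
# (frozen random patterns here; in practice they are trained end to end).
prompts = rng.normal(scale=0.05, size=(num_prompts, 224, 224, 3))
prompted_images = np.clip(image + prompts, 0.0, 1.0)

# Stand-in for CLIP similarities of each prompted image to the class texts,
# turned into per-prompt class probabilities via a softmax over classes.
logits = rng.normal(size=(num_prompts, num_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Hypothetical PRM: one weight per (prompt, class), normalized over prompts,
# so each class score is a convex combination of the prompts' outputs.
prm = rng.random((num_prompts, num_classes))
prm /= prm.sum(axis=0, keepdims=True)

combined = (prm * probs).sum(axis=0)     # reweighted prompt ensemble
prediction = int(combined.argmax())      # predicted downstream class
```

Because the PRM columns are normalized over prompts, inspecting a column shows how much each visual prompt contributed to that class's score, which is the interpretability angle the abstract mentions.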