🤖 AI Summary
Traditional visual prompting methods suffer from limited representational capacity and susceptibility to overfitting due to their reliance on simple additive transformations, resulting in suboptimal performance compared to alternative adaptation techniques. To address this, we propose Affine-Color-Additive Visual Prompting (ACAVP), the first framework that jointly models affine transformations, color-space adjustments, and additive prompts, thereby substantially enhancing prompt expressivity. Additionally, we integrate TrivialAugment during prompt optimization to mitigate overfitting; this augmentation not only benefits ACAVP but also improves existing visual prompting methods by up to 12 percentage points on certain datasets. Evaluated across twelve image classification benchmarks with two model architectures, ACAVP achieves state-of-the-art accuracy among visual prompting methods, surpasses linear probing in average accuracy, and demonstrates superior out-of-distribution generalization, all while preserving inference efficiency.
📝 Abstract
Visual prompting (VP) has emerged as a promising parameter-efficient fine-tuning approach for adapting pre-trained vision models to downstream tasks without modifying model parameters. Despite offering advantages such as negligible computational overhead and compatibility with black-box models, conventional VP methods typically achieve lower accuracy than other adaptation approaches. Our analysis reveals two critical limitations: the restricted expressivity of simple additive transformations and a tendency toward overfitting as the parameter count increases. To address these challenges, we propose ACAVP (Affine, Color, and Additive Visual Prompting), which enhances VP's expressive power by introducing complementary transformation operations: an affine transformation that creates task-specific prompt regions while preserving the original image information, and a color transformation that emphasizes task-relevant visual features. Additionally, we identify overfitting as a critical issue in VP training and introduce TrivialAugment as an effective data augmentation strategy, which not only benefits our approach but also significantly improves existing VP methods, with performance gains of up to 12 percentage points on certain datasets. This demonstrates that appropriate data augmentation is universally beneficial for VP training. Extensive experiments across twelve diverse image classification datasets with two model architectures demonstrate that ACAVP achieves state-of-the-art accuracy among VP methods, surpasses linear probing in average accuracy, and exhibits superior robustness to distribution shifts, all while maintaining minimal computational overhead during inference.
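To make the composition of the three transformations concrete, here is a minimal NumPy sketch of how an ACAVP-style prompt could be applied to an input image. The function name `acavp_prompt`, the `shrink` factor, and the exact order of operations are illustrative assumptions, not the paper's exact implementation: the affine step is simplified to shrinking the image onto a learnable canvas (creating a border prompt region while preserving the image content), the color step is a per-channel affine adjustment of pixel values, and the additive step is the classic additive prompt.

```python
import numpy as np

def acavp_prompt(image, canvas_params, color_scale, color_shift, delta, shrink=0.9):
    """Hypothetical sketch of an ACAVP-style visual prompt.

    image:         (H, W, 3) float array in [0, 1]
    canvas_params: (H, W, 3) learnable canvas; its visible border acts as
                   the task-specific prompt region
    color_scale:   (3,) learnable per-channel scale (color transformation)
    color_shift:   (3,) learnable per-channel shift (color transformation)
    delta:         (H, W, 3) learnable additive prompt
    shrink:        affine scale factor applied to the input image
    """
    H, W, _ = image.shape
    # Affine step (simplified to uniform scaling): shrink the image and
    # center it on the learnable canvas, leaving a trainable border.
    h, w = int(H * shrink), int(W * shrink)
    rows = np.arange(h) * H // h            # nearest-neighbour resize indices
    cols = np.arange(w) * W // w
    small = image[rows][:, cols]
    out = canvas_params.copy()
    top, left = (H - h) // 2, (W - w) // 2
    out[top:top + h, left:left + w] = small
    # Color step: per-channel affine adjustment of pixel values.
    out = out * color_scale + color_shift
    # Additive step: conventional additive visual prompt.
    return np.clip(out + delta, 0.0, 1.0)
```

The prompted image would then be fed to the frozen backbone as usual; only `canvas_params`, `color_scale`, `color_shift`, and `delta` would be optimized, which keeps the trainable parameter count small relative to the backbone.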