🤖 AI Summary
Existing data augmentation methods for CLIP prompt tuning rely on external knowledge sources, incur high computational costs, and neglect image-modality-specific features. To address these limitations, this paper proposes AugPT—a self-supervised, prompt-augmentation framework that operates solely on the original training data. Its core innovation is a consensus-test-based gating mechanism that leverages the pretrained CLIP model itself to automatically select high-quality augmented views, eliminating the need for external language models or knowledge bases. AugPT jointly integrates self-supervised image augmentation, lightweight prompt tuning, and knowledge distillation. Extensive experiments across multiple benchmarks demonstrate that AugPT significantly improves both in-domain accuracy and cross-domain generalization, while drastically reducing data curation overhead. To our knowledge, AugPT is the first method to achieve fully endogenous, modality-aware prompt augmentation for CLIP.
📝 Abstract
For CLIP-based prompt tuning, introducing additional data as extra knowledge has proven effective for enhancing the fine-tuning process. Existing data-amplification strategies for prompt tuning typically rely on external knowledge (e.g., large language models or pre-structured knowledge bases), incurring higher costs for data collection and processing while generally neglecting further exploitation of image-modality features. To address this, we propose Augmentation-driven Prompt Tuning (AugPT), a self-contained, distillation-based prompt tuning approach that uses only internal augmentation of the raw dataset to better exploit known features. Specifically, AugPT applies self-supervised augmentation to unlabeled images in the training set and introduces a novel gating mechanism based on a consensus test, reusing the pre-trained prompt tuning backbone model to spontaneously filter noisy samples and further enhance the quality of augmented views. Extensive experiments validate that AugPT simultaneously improves model performance and generalization capability without appending external knowledge. The code of AugPT is available at: https://github.com/JREion/AugPT.
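The consensus-test gating described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the names `predict_class` and `consensus_filter` are hypothetical, and plain vectors stand in for CLIP image/text embeddings. The idea shown is that an augmented view passes the gate only if the (frozen) model's zero-shot prediction on the view agrees with its prediction on the original image.

```python
import numpy as np

def predict_class(image_emb: np.ndarray, text_embs: np.ndarray) -> int:
    """Zero-shot prediction: index of the class text embedding with the
    highest cosine similarity to the image embedding (CLIP-style scoring)."""
    sims = text_embs @ image_emb / (
        np.linalg.norm(text_embs, axis=1) * np.linalg.norm(image_emb) + 1e-8
    )
    return int(np.argmax(sims))

def consensus_filter(orig_emb: np.ndarray,
                     view_embs: list[np.ndarray],
                     text_embs: np.ndarray) -> list[np.ndarray]:
    """Keep only augmented views whose prediction agrees with the
    prediction on the original image (a simple 'consensus test')."""
    ref = predict_class(orig_emb, text_embs)
    return [v for v in view_embs if predict_class(v, text_embs) == ref]
```

In an actual pipeline, the embeddings would come from the pre-trained prompt-tuning backbone, so filtering requires no external model or knowledge base, which is the property the abstract emphasizes.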