Raw Data Matters: Enhancing Prompt Tuning by Internal Augmentation on Vision-Language Models

📅 2025-08-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing data augmentation methods for CLIP prompt tuning rely on external knowledge sources, incur high computational costs, and neglect image-modality-specific features. To address these limitations, this paper proposes AugPT—a self-supervised, prompt-augmentation framework that operates solely on the original training data. Its core innovation is a consensus-test-based gating mechanism that leverages the pretrained CLIP model itself to automatically select high-quality augmented views, eliminating the need for external language models or knowledge bases. AugPT jointly integrates self-supervised image augmentation, lightweight prompt tuning, and knowledge distillation. Extensive experiments across multiple benchmarks demonstrate that AugPT significantly improves both in-domain accuracy and cross-domain generalization, while drastically reducing data curation overhead. To our knowledge, AugPT is the first method to achieve fully endogenous, modality-aware prompt augmentation for CLIP.

📝 Abstract
For CLIP-based prompt tuning, introducing more data as additional knowledge to enhance the fine-tuning process has proven to be an effective approach. Existing data-amplification strategies for prompt tuning typically rely on external knowledge (e.g., large language models or pre-structured knowledge bases), which raises the cost of data collection and processing, and they generally overlook further use of features in the image modality. To address this, we propose Augmentation-driven Prompt Tuning (AugPT), a self-contained, distillation-based prompt tuning approach that uses only internal augmentation of the raw dataset to better exploit known features. Specifically, AugPT applies self-supervised augmentation to unlabeled images in the training set and introduces a novel gating mechanism based on a consensus test, reusing the pre-trained prompt tuning backbone model to spontaneously filter noisy samples and further enhance the quality of augmented views. Extensive experiments validate that AugPT simultaneously improves model performance and generalization capability without appending external knowledge. The code of AugPT is available at: https://github.com/JREion/AugPT .
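The consensus-test gating described above can be sketched in simplified form: an augmented view is kept only if the backbone's predicted class for that view agrees with its prediction for the original image. This is a minimal conceptual sketch, not the authors' implementation; the function name, the NumPy logits representation, and the reduction of the consensus test to argmax agreement are all assumptions for illustration.

```python
import numpy as np

def consensus_filter(orig_logits, aug_logits_list):
    """Keep augmented views whose predicted class agrees with the
    prediction for the original image (a simplified consensus test).

    orig_logits:     per-class scores for the original image,
                     e.g. CLIP image-text similarity logits.
    aug_logits_list: list of per-class score arrays, one per
                     augmented view of the same image.
    Returns the reference class index and the surviving views.
    """
    ref_class = int(np.argmax(orig_logits))
    kept = [a for a in aug_logits_list
            if int(np.argmax(a)) == ref_class]
    return ref_class, kept

# Toy example: 3-class logits for one image and three augmented views.
orig = np.array([0.1, 0.7, 0.2])            # predicts class 1
augs = [np.array([0.2, 0.6, 0.2]),          # agrees   -> kept
        np.array([0.5, 0.3, 0.2]),          # disagrees -> filtered out
        np.array([0.1, 0.8, 0.1])]          # agrees   -> kept
ref, kept = consensus_filter(orig, augs)    # ref == 1, len(kept) == 2
```

In the paper's setting the logits would come from the pre-trained prompt tuning backbone itself, which is what makes the filtering self-contained: no external model is needed to judge view quality.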
Problem

Research questions and friction points this paper is trying to address.

Enhancing CLIP-based prompt tuning without external knowledge
Utilizing internal image features for data augmentation
Improving model performance and generalization via self-supervised augmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Internal augmentation on raw dataset
Self-supervised image augmentation
Gating mechanism filters noisy samples
Haoyang Li
Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, Australia
Liang Wang
Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, Australia
Chao Wang
School of Mechanical Engineering and Automation, Shanghai University, Shanghai, China
Siyu Zhou
Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, Australia
Jing Jiang
Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, Australia
Yan Peng
Professor, Shanghai University
Robotics
Guodong Long
Associate Professor, Faculty of Engineering and IT, University of Technology Sydney
Federated Learning · Foundation Models · Federated Intelligence · Foundation Agents · Digital Health