FVG-PT: Adaptive Foreground View-Guided Prompt Tuning for Vision-Language Models

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation commonly observed in CLIP-based prompt tuning methods when adapting to downstream tasks, which often stems from neglecting the shift in foreground attention representations within the visual encoder. To this end, we propose FVG-PT, a plug-and-play adaptive foreground attention guidance module that, for the first time, explicitly links foreground attention shift to prompt tuning failure. FVG-PT dynamically steers and balances visual attention through the synergistic operation of three components: foreground reliability gating, attention distillation compensation, and prior calibration. Notably, our method requires no modification to the backbone architecture and seamlessly integrates into the prompt tuning pipeline of vision-language models such as CLIP. Extensive experiments across multiple backbones and datasets demonstrate significant improvements in downstream task adaptation, validating the effectiveness and generalizability of our approach.
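The foreground reliability gating described above can be illustrated with a minimal sketch. This is a hypothetical formulation, not the paper's actual implementation: it assumes the gate is a learnable scalar passed through a sigmoid that blends a foreground-view feature with the global feature.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_foreground_blend(global_feat, foreground_feat, alpha):
    """Blend foreground-view and global features with a reliability gate.

    g = sigmoid(alpha) in (0, 1) decides how much of the foreground view
    to trust; alpha would be a learnable parameter during prompt tuning.
    (Illustrative only -- FVG-PT's actual gate design is not given here.)
    """
    g = sigmoid(alpha)
    return g * foreground_feat + (1.0 - g) * global_feat

# Toy 4-d features standing in for encoder outputs.
global_feat = np.array([1.0, 0.0, 0.0, 1.0])
foreground_feat = np.array([0.0, 1.0, 1.0, 0.0])

# alpha = 0 gives g = 0.5, an even mix of the two views.
blended = gated_foreground_blend(global_feat, foreground_feat, alpha=0.0)
```

As alpha grows during tuning, the gate shifts weight toward the foreground view only when that view proves reliable, which is the intent of a learnable gate over a fixed mixing coefficient.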

📝 Abstract
CLIP-based prompt tuning enables pretrained Vision-Language Models (VLMs) to efficiently adapt to downstream tasks. Although existing studies have made significant progress, they pay limited attention to changes in the internal attention representations of VLMs during the tuning process. In this paper, we attribute the failure modes of prompt tuning predictions to shifts in foreground attention of the visual encoder, and propose Foreground View-Guided Prompt Tuning (FVG-PT), an adaptive plug-and-play foreground attention guidance module, to alleviate these shifts. Concretely, FVG-PT introduces a learnable Foreground Reliability Gate to automatically enhance the foreground view quality, applies a Foreground Distillation Compensation module to guide visual attention toward the foreground, and further introduces a Prior Calibration module to mitigate generalization degradation caused by excessive focus on the foreground. Experiments on multiple backbone models and datasets show the effectiveness and compatibility of FVG-PT. Code is available at: https://github.com/JREion/FVG-PT
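The Foreground Distillation Compensation module guides visual attention toward the foreground. One plausible way to realize such guidance is a divergence penalty between the encoder's per-patch attention distribution and a foreground prior; the sketch below assumes a KL-divergence loss against a normalized foreground mask, which is an illustrative choice rather than the paper's stated objective.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete distributions over image patches."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def foreground_distillation_loss(attn_map, foreground_mask):
    """Penalize attention mass that drifts off the foreground region.

    `attn_map` is a per-patch attention distribution from the visual
    encoder; `foreground_mask` is a (soft) foreground indicator over the
    same patches, normalized into a target distribution. (Hypothetical
    objective -- FVG-PT's exact distillation loss is not given here.)
    """
    target = foreground_mask / foreground_mask.sum()
    return kl_divergence(target, attn_map)

# Toy example over 4 patches: the first two are foreground.
mask = np.array([1.0, 1.0, 0.0, 0.0])
focused = np.array([0.45, 0.45, 0.05, 0.05])  # attention on foreground
diffuse = np.array([0.25, 0.25, 0.25, 0.25])  # attention spread uniformly
```

Under this loss, attention concentrated on the foreground patches incurs a smaller penalty than uniformly spread attention, so minimizing it during tuning pushes the encoder back toward the foreground view.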
Problem

Research questions and friction points this paper is trying to address.

prompt tuning
foreground attention
vision-language models
attention shift
CLIP
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prompt Tuning
Foreground Attention
Vision-Language Models
Adaptive Guidance
Attention Calibration