Generalizable Vision-Language Few-Shot Adaptation with Predictive Prompts and Negative Learning

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the weak generalization of vision-language models under few-shot learning with noisy support sets. We propose PromptFuseNL, a framework featuring task-conditioned residual prototype refinement and multi-stage cross-modal coordination. The method integrates predictive prompt tuning, dual-branch (positive/negative) contrastive learning, and a lightweight cross-modal feature fusion module. To mitigate label noise without additional supervision, it introduces semantic hard negative mining and unsupervised instance reweighting. Evaluated on 15 few-shot vision-language benchmarks, PromptFuseNL consistently outperforms existing prompt-based and adapter-based methods, setting a new state of the art. It is also efficient, achieving up to 300× faster training and 1000× fewer FLOPs than full prompt tuning, while remaining robust to label noise.
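The dual-branch idea pairs each class with a positive prompt (what the class is) and a negative prompt (what it is not), while instance reweighting downweights support examples that disagree with their own class prototype. A minimal sketch of both pieces, assuming CLIP-style normalized embeddings; all function names, the prompt-combination rule, and the similarity-based weighting are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def dual_branch_logits(image_feats, pos_text_feats, neg_text_feats, temperature=0.07):
    """Combine positive- and negative-prompt similarities: a class's logit
    rises when the image matches its positive prompt and falls when it
    matches the negative ("not a photo of a ...") prompt."""
    img = F.normalize(image_feats, dim=-1)       # (N, D)
    pos = F.normalize(pos_text_feats, dim=-1)    # (C, D)
    neg = F.normalize(neg_text_feats, dim=-1)    # (C, D)
    pos_logits = img @ pos.t() / temperature     # (N, C)
    neg_logits = img @ neg.t() / temperature     # (N, C)
    return pos_logits - neg_logits

def instance_weights(support_feats, support_labels, prototypes, tau=1.0):
    """Unsupervised reweighting proxy: support examples whose features are
    far from their labeled class prototype (likely mislabeled) get small
    weights. Weights are normalized so their mean is roughly 1."""
    feats = F.normalize(support_feats, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    sims = (feats * protos[support_labels]).sum(dim=-1)   # cosine to own prototype
    return torch.softmax(sims / tau, dim=0) * len(sims)
```

In a training loop these weights would scale each support example's loss term, so noisy labels contribute less without any extra supervision.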

📝 Abstract
Few-shot adaptation remains a core challenge for vision-language models (VLMs), especially under limited supervision and noisy support samples. We propose PromptFuseNL, a unified framework that enhances few-shot generalization by combining predictive prompt tuning with dual-branch positive and negative learning. The method refines class prototypes through task-conditioned residuals, multi-stage cross-modal coordination, and semantic hard negative mining. To address label noise, we introduce an unsupervised instance reweighting strategy that downweights unreliable support examples without requiring additional labels or structural changes. PromptFuseNL fuses visual and textual cues through lightweight modules for efficient and discriminative prediction. Evaluated across 15 benchmarks, it consistently surpasses existing prompt- and adapter-based methods in all shot settings while remaining highly efficient, with up to 300x faster training and 1000x lower FLOPs compared to full prompt tuning, setting a new state of the art for robust and scalable few-shot vision-language adaptation.
Problem

Research questions and friction points this paper is trying to address.

Enhancing few-shot generalization in vision-language models
Addressing label noise in limited supervision scenarios
Improving efficiency and scalability of few-shot adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Predictive prompt tuning with dual-branch learning
Unsupervised instance reweighting for label noise
Lightweight cross-modal fusion for efficient prediction
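The first innovation, task-conditioned residual prototype refinement, can be sketched as a small gated MLP that nudges frozen class prototypes using a pooled summary of the support set. The module name, the concatenation-based conditioning, and the scalar gate are assumptions for illustration, not the paper's published architecture:

```python
import torch
import torch.nn as nn

class ResidualPrototypeRefiner(nn.Module):
    """Refine frozen class prototypes with a task-conditioned residual:
    each prototype is concatenated with a task embedding (e.g. the mean
    support feature), passed through a small MLP, and added back through
    a learnable gate that keeps updates close to the original prototype."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )
        # Small initial gate so training starts near the zero-shot prototypes.
        self.gate = nn.Parameter(torch.tensor(0.1))

    def forward(self, prototypes, task_embedding):
        # Broadcast the task summary to every class, then add a gated residual.
        task = task_embedding.expand_as(prototypes)          # (C, D)
        residual = self.mlp(torch.cat([prototypes, task], dim=-1))
        return prototypes + self.gate * residual
```

Because only the small MLP and gate are trained while the backbone stays frozen, this adapter-style design is consistent with the efficiency claims (far fewer FLOPs than tuning full prompt parameters).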