🤖 AI Summary
This work addresses the weak generalization of vision-language models in few-shot settings with noisy support labels. We propose PromptFuseNL, a framework featuring task-conditioned residual prototype refinement and multi-stage cross-modal coordination. The method integrates predictive prompt tuning, dual-branch (positive/negative) contrastive learning, and a lightweight cross-modal feature-fusion module. To mitigate label noise without additional supervision, it introduces semantic hard negative mining and unsupervised instance reweighting. Evaluated on 15 few-shot vision-language benchmarks, PromptFuseNL consistently outperforms existing prompt- and adapter-based methods, setting a new state of the art. It also trains up to 300× faster and uses up to 1000× fewer FLOPs than full prompt tuning, improving both computational efficiency and robustness to label noise.
📝 Abstract
Few-shot adaptation remains a core challenge for vision-language models (VLMs), especially under limited supervision and noisy support samples. We propose PromptFuseNL, a unified framework that enhances few-shot generalization by combining predictive prompt tuning with dual-branch positive and negative learning. The method refines class prototypes through task-conditioned residuals, multi-stage cross-modal coordination, and semantic hard negative mining. To address label noise, we introduce an unsupervised instance reweighting strategy that downweights unreliable support examples without requiring additional labels or structural changes. PromptFuseNL fuses visual and textual cues through lightweight modules for efficient and discriminative prediction. Evaluated across 15 benchmarks, it consistently surpasses existing prompt- and adapter-based methods in all shot settings while remaining highly efficient, training up to 300× faster with up to 1000× fewer FLOPs than full prompt tuning, and sets a new state of the art for robust and scalable few-shot vision-language adaptation.
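The core ideas of prototype construction with unsupervised instance reweighting can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the specific weighting scheme (softmax over each sample's cosine agreement with its class consensus, temperature `tau`) and the function names are assumptions chosen for clarity.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize feature vectors to unit length (cosine geometry)."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def reweighted_prototypes(support_feats, support_labels, num_classes, tau=10.0):
    """Build class prototypes while downweighting support samples that
    disagree with their class consensus — a stand-in for the paper's
    unsupervised instance reweighting (exact scheme assumed)."""
    feats = l2_normalize(np.asarray(support_feats, dtype=float))
    protos = np.zeros((num_classes, feats.shape[1]))
    for c in range(num_classes):
        cls = feats[np.asarray(support_labels) == c]
        consensus = l2_normalize(cls.mean(axis=0))
        sims = cls @ consensus                  # agreement with class mean
        w = np.exp(tau * sims)
        w /= w.sum()                            # softmax: noisy samples shrink
        protos[c] = l2_normalize((w[:, None] * cls).sum(axis=0))
    return protos

def classify(query_feats, protos):
    """Nearest-prototype prediction via cosine similarity."""
    return (l2_normalize(query_feats) @ protos.T).argmax(axis=-1)
```

A mislabeled support example sits far from its class mean, receives a near-zero weight, and so barely perturbs the prototype; with plain averaging the same example would shift the prototype toward the wrong class.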