Seeing is Believing: Robust Vision-Guided Cross-Modal Prompt Learning under Label Noise

📅 2026-04-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited robustness of prompt-based learning for vision-language models under label noise and proposes VisPrompt, a novel framework that uses instance-level visual evidence as a stable anchor. Through a cross-modal attention mechanism, VisPrompt injects visual semantics back into the prompt representations, and a lightweight conditional modulation module adaptively controls the strength of image-text fusion for each sample. The method keeps the pre-trained model parameters frozen, enabling efficient fine-tuning. Extensive experiments under both synthetic and real-world label noise show that VisPrompt significantly outperforms existing approaches across seven benchmark datasets, effectively suppressing noise interference and mitigating memorization of incorrect labels.
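
To make the injection step concrete, here is a minimal sketch of the cross-modal attention idea the summary describes, where learnable prompt tokens attend over frozen image patch features so each token aggregates instance-level visual evidence. This is not the authors' released code; the class name, dimensions, and the use of standard multi-head attention are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VisualInjection(nn.Module):
    """Cross-modal attention that lets prompt tokens query image patch features.

    A hedged sketch of the idea described above, NOT the paper's implementation:
    names and dimensions are assumptions.
    """

    def __init__(self, dim: int = 512, n_heads: int = 8):
        super().__init__()
        # Prompt tokens act as queries; frozen image patch features as keys/values.
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, prompts: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # prompts: (B, n_ctx, dim) learnable context tokens
        # patches: (B, n_patches, dim) features from the frozen image encoder
        visual, _ = self.attn(query=prompts, key=patches, value=patches)
        # Residual injection keeps the text-side prior while adding
        # instance-level visual evidence to each prompt token.
        return self.norm(prompts + visual)

# Shape check with dummy tensors: 4 images, 16 context tokens, 14x14 patches.
inj = VisualInjection()
out = inj(torch.randn(4, 16, 512), torch.randn(4, 196, 512))
print(out.shape)  # torch.Size([4, 16, 512])
```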

📝 Abstract
Prompt learning is a parameter-efficient approach for adapting vision-language models, yet its robustness under label noise remains underexplored. Visual content carries richer and more reliable semantic information and stays comparatively robust under label noise, whereas the learnable prompt itself is highly susceptible to it. Motivated by this intuition, we propose VisPrompt, a lightweight and robust vision-guided prompt learning framework for noisy-label settings. Specifically, we exploit a cross-modal attention mechanism to inject visual semantics back into the prompt representations. This enables the prompt tokens to selectively aggregate visual information relevant to the current sample, improving robustness by anchoring prompt learning to stable instance-level visual evidence and reducing the influence of noisy supervision. Because injecting visual information in the same way for all samples is unstable when the quality of their visual cues differs, we further introduce a lightweight conditional modulation mechanism that adaptively controls the strength of visual injection, striking a more robust balance between text-side semantic priors and image-side instance evidence. The proposed framework effectively suppresses noise-induced disturbances, reduces instability in prompt updates, and alleviates memorization of mislabeled samples. VisPrompt significantly improves robustness while keeping the pretrained VLM backbone frozen and introducing only a small number of additional trainable parameters. Extensive experiments under synthetic and real-world label noise demonstrate that VisPrompt generally outperforms existing baselines on seven benchmark datasets and achieves stronger robustness. Our code is publicly available at https://github.com/gezbww/Vis_Prompt.
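
The conditional modulation can be read as a per-sample gate on how strongly visual signal is fused into the prompts. Below is a minimal hedged sketch assuming a tiny gating MLP conditioned on the pooled image feature; the module name, shapes, and sigmoid gate are hypothetical, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConditionalModulation(nn.Module):
    """Per-sample gate on the strength of visual injection (hedged sketch).

    A hypothetical reading of the abstract's "lightweight conditional
    modulation mechanism"; the gating network and shapes are assumptions.
    """

    def __init__(self, dim: int = 512, hidden: int = 64):
        super().__init__()
        # A tiny MLP keeps the added trainable parameter count small.
        self.gate = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # per-sample fusion strength in (0, 1)
        )

    def forward(self, prompts: torch.Tensor, visual: torch.Tensor,
                image_feat: torch.Tensor) -> torch.Tensor:
        # prompts:    (B, n_ctx, dim) text-side context tokens
        # visual:     (B, n_ctx, dim) signal gathered by cross-modal attention
        # image_feat: (B, dim) pooled global feature of the current image
        alpha = self.gate(image_feat).unsqueeze(1)  # (B, 1, 1), broadcast over tokens
        # Weak/unreliable visual cues -> small alpha -> fall back to the text prior;
        # strong cues -> lean more on image-side instance evidence.
        return prompts + alpha * visual
```

Gating at the sample level is what lets the fusion strength differ across images, which is the balance between text-side semantic priors and image-side instance evidence that the abstract describes.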
Problem

Research questions and friction points this paper is trying to address.

prompt learning
label noise
vision-language models
robustness
cross-modal
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-guided prompt learning
label noise robustness
cross-modal attention
conditional modulation
parameter-efficient tuning
Zibin Geng
Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China
Xuefeng Jiang
Institute of Computing Technology, Chinese Academy of Sciences
Weakly-supervised Learning · Distributed Optimization · Autonomous Driving · Noisy Label Learning
Jia Li
Institute of Information Engineering, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China
Zheng Li
Nankai University
Computer Vision · Vision-Language Models · Multi-Modal Learning
Tian Wen
Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China
Lvhua Wu
Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences, Beijing, China
Sheng Sun
Institute of Computing Technology, Chinese Academy of Sciences
Federated Learning · Edge Intelligence
Yuwei Wang
Institute of Computing Technology, Chinese Academy of Sciences
Mobile Edge Computing · Edge Intelligence · Unmanned Systems Network Collaboration
Min Liu
Institute of Computing Technology, Chinese Academy of Sciences
Computing · Networking