Can VLMs Recall Factual Associations From Visual References?

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision language models (VLMs) exhibit a systematic deficiency in recalling factual associations from visual references: recall accuracy remains high when an entity is referenced in text, but degrades significantly when the model must rely on the entity's image instead, revealing inadequate multimodal grounding. Through controlled experiments, the authors identify distinct, failure-correlated patterns in the models' internal states. Leveraging these patterns, they design a lightweight probe that, without any retraining, flags unreliable responses with over 92% accuracy, enabling selective prediction. Applied to visual question answering, the probe increases coverage by 7.87% (absolute) while reducing the risk of error by 0.9% (absolute). This work links diagnostic analysis of internal states directly to reliability enhancement in VLMs, offering a practical route to more controllable, trustworthy multimodal inference.

📝 Abstract
Through a controlled study, we identify a systematic deficiency in the multimodal grounding of Vision Language Models (VLMs). While VLMs can recall factual associations when provided with a textual reference to an entity, their ability to do so is significantly diminished when the reference is visual instead. Forcing VLMs to rely on image representations of an entity halves their ability to recall factual knowledge, suggesting that VLMs struggle to link their internal knowledge of an entity with its image representation. We show that such linking failures are correlated with the expression of distinct patterns in model internal states, and that probes on these internal states achieve over 92% accuracy at flagging cases where the VLM response is unreliable. These probes can be applied, without retraining, to identify when a VLM will fail to correctly answer a question that requires an understanding of multimodal input. When used to facilitate selective prediction on a visual question answering task, the probes increase coverage by 7.87% (absolute) while also reducing the risk of error by 0.9% (absolute). Addressing this systematic, detectable deficiency is an important avenue in language grounding, and we provide informed recommendations for future directions.
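The probing idea described above can be sketched in miniature. The toy below trains a nearest-centroid "probe" on synthetic stand-ins for VLM hidden states, labeled by whether the model's answer was reliable; the dimensionality, the Gaussian class separation, and the nearest-centroid classifier are all illustrative assumptions, not the paper's actual features or probe architecture.

```python
import random

random.seed(0)
DIM = 16  # stand-in for a hidden-state dimension (assumption)

def sample(mean, n):
    # Synthetic hidden states: one Gaussian cluster per reliability label.
    return [[random.gauss(mean, 1.0) for _ in range(DIM)] for _ in range(n)]

reliable = sample(0.0, 200)    # states where the VLM answered correctly
unreliable = sample(1.0, 200)  # states correlated with linking failures
X = reliable + unreliable
y = [0] * 200 + [1] * 200

def centroid(points):
    return [sum(col) / len(points) for col in zip(*points)]

mu0, mu1 = centroid(reliable), centroid(unreliable)

def sqdist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

# The "probe": flag a state as unreliable if it lies closer to the
# unreliable-class centroid than to the reliable-class centroid.
pred = [1 if sqdist(x, mu1) < sqdist(x, mu0) else 0 for x in X]
acc = sum(p == t for p, t in zip(pred, y)) / len(y)
print(f"probe accuracy: {acc:.2f}")
```

If failure-correlated patterns are as separable as the paper's >92% figure suggests, even a simple linear or centroid-based classifier over internal states can serve as a fine-tuning-free unreliability detector.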
Problem

Research questions and friction points this paper is trying to address.

Why do VLMs struggle to recall factual associations from visual references?
Can patterns in internal states detect when a VLM's multimodal response is unreliable?
Can such probes enable selective prediction on visual question answering?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Probing internal states for error detection
Selective prediction to reduce error risk
Non-retraining reliability assessment method
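The selective-prediction contribution can be illustrated with a small sketch: answer only when the probe's unreliability score falls below a threshold, then report coverage (fraction of questions answered) and risk (error rate among answered questions), the two quantities the paper improves by 7.87% and 0.9%. The scores, labels, and threshold here are toy values, not the paper's data.

```python
def selective_metrics(unreliability_scores, is_correct, threshold):
    """Abstain whenever the probe's unreliability score >= threshold.

    Returns (coverage, risk): coverage is the fraction of questions
    answered; risk is the error rate among the answered questions.
    """
    answered = [c for s, c in zip(unreliability_scores, is_correct)
                if s < threshold]
    coverage = len(answered) / len(is_correct)
    risk = 1 - sum(answered) / len(answered) if answered else 0.0
    return coverage, risk

# Toy example: five questions, probe scores, and ground-truth correctness.
scores = [0.1, 0.8, 0.3, 0.9, 0.2]
correct = [True, False, True, False, True]
cov, risk = selective_metrics(scores, correct, threshold=0.5)
print(cov, risk)  # answers 3 of 5 questions, all of them correct
```

Because the probe runs on internal states that are already computed during inference, this filtering step requires no retraining of the underlying VLM.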