AI Summary
Current vision-language models (VLMs) are trained on static datasets and struggle to adapt to time-varying factual knowledge, often yielding outdated predictions and cross-modal inconsistencies. To address this limitation, this work introduces V-DyKnow, the first dynamic benchmark for evaluating temporal knowledge in VLMs. The benchmark systematically assesses a model's ability to acquire, update, and maintain temporally accurate knowledge through multimodal knowledge editing, cross-modal retrieval-augmented generation (RAG), input perturbations, and mechanistic analysis. Empirical findings reveal that VLMs commonly suffer from knowledge obsolescence, with visual modalities exhibiting substantially lower reliability than textual ones, and that existing alignment techniques prove insufficient for achieving consistent cross-modal knowledge updates.
Abstract
Vision-Language Models (VLMs) are trained on data snapshots of documents, including images and text. Their training data and evaluation benchmarks are typically static, implicitly treating factual knowledge as time-invariant. However, real-world facts are intrinsically time-sensitive and subject to both erratic and periodic change, causing model predictions to become outdated. We present V-DyKnow, a Visual Dynamic Knowledge benchmark for evaluating time-sensitive factual knowledge in VLMs. Using V-DyKnow, we benchmark closed- and open-source VLMs and analyze a) the reliability (correctness and consistency) of model responses across modalities and input perturbations; b) the efficacy of knowledge editing and multimodal RAG methods for updating knowledge across modalities; and c) the sources of outdated predictions, through data and mechanistic analysis. Our results show that VLMs frequently output outdated facts, reflecting the outdated snapshots used during (pre-)training. Factual reliability degrades from textual to visual stimuli, even when entities are correctly recognized. Moreover, existing alignment approaches fail to consistently update the models' knowledge across modalities. Together, these findings highlight fundamental limitations in how current VLMs acquire and update time-sensitive knowledge across modalities. We release the benchmark, code, and evaluation data.