🤖 AI Summary
This paper addresses entity-level inconsistency between the textual and visual modalities of news articles, specifically for persons, locations, and events. To this end, it proposes LVLM4CEC, a large vision-language model (LVLM) framework for cross-modal entity consistency verification. Methodologically, the authors introduce prompting strategies that leverage reference images crawled from the web, extend three existing datasets with manual ground-truth annotations for persons, locations, and events, and design an end-to-end entity alignment and verification pipeline. Experiments show improved accuracy in identifying persons and events when evidence images are used, and the method outperforms a baseline for location and event verification in documents. The datasets and source code are publicly released, providing a benchmark resource for cross-modal misinformation detection.
📝 Abstract
The web has become a crucial source of information, but it is also used to spread disinformation, often conveyed through multiple modalities such as images and text. Identifying inconsistent cross-modal information, in particular entities such as persons, locations, and events, is critical to detecting disinformation. Previous works either identify out-of-context disinformation by assessing the consistency of images with the whole document, neglecting the relations of individual entities, or focus on generic entities that are not relevant to news. So far, only a few approaches have addressed the task of validating entity consistency between images and text in news, and the potential of large vision-language models (LVLMs) has not been explored yet. In this paper, we propose an LVLM-based framework for verifying Cross-modal Entity Consistency (LVLM4CEC) to assess whether persons, locations, and events in news articles are consistent across both modalities. We suggest effective prompting strategies for LVLMs that leverage reference images crawled from the web for entity verification. Moreover, we extend three existing datasets for the task of entity verification in news, providing manual ground-truth data. Our results show the potential of LVLMs for automating cross-modal entity verification, with improved accuracy in identifying persons and events when using evidence images. Moreover, our method outperforms a baseline for location and event verification in documents. The datasets and source code are available at https://github.com/TIBHannover/LVLM4CEC.