MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models

πŸ“… 2024-06-17
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 3
✨ Influential: 0
πŸ€– AI Summary
Current large vision-language models (LVLMs) exhibit factual inconsistencies in multimodal reasoning, yet a systematic fact-checking evaluation has been absent. Method: We introduce MFC-Bench, the first fine-grained benchmark for Multimodal Fact-Checking (MFC), built on authentic news articles and featuring a three-stage evaluation framework: manipulation detection, contextual integrity (out-of-context) assessment, and veracity classification. It combines human annotation with adversarial perturbations to enable granular diagnostic analysis. Contribution/Results: Across 12 state-of-the-art LVLMs, experiments reveal consistently low accuracy (<60%) on the manipulation detection and contextual integrity tasks, exposing severe insensitivity to image-text manipulations. MFC-Bench fills a critical gap in trustworthy multimodal reasoning assessment and provides a reproducible, structured evaluation paradigm for advancing factual consistency in LVLMs.

πŸ“ Abstract
Large vision-language models (LVLMs) have significantly improved multimodal reasoning tasks, such as visual question answering and image captioning. These models embed multimodal facts within their parameters, rather than relying on external knowledge bases to store factual information explicitly. However, the content discerned by LVLMs may deviate from factuality due to inherent bias or incorrect inference. To address this issue, we introduce MFC-Bench, a rigorous and comprehensive benchmark designed to evaluate the factual accuracy of LVLMs across three stages of verdict prediction for MFC: Manipulation, Out-of-Context, and Veracity Classification. Through our evaluation on MFC-Bench, we benchmarked a dozen diverse and representative LVLMs, uncovering that current models still fall short in multimodal fact-checking and demonstrate insensitivity to various forms of manipulated content. We hope that MFC-Bench will draw attention to the role LVLMs could play in trustworthy AI in the future. The MFC-Bench and accompanying resources are publicly accessible at https://github.com/wskbest/MFC-Bench, contributing to ongoing research in the multimodal fact-checking field.
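The three-stage verdict prediction described above can be sketched as a simple evaluation loop. The code below is a minimal illustration, not the paper's actual harness: the label sets, the `classify` fallback rule, and the stub model are all assumptions for demonstration; a real run would query an LVLM with each image-claim pair.

```python
# Hypothetical sketch of three-stage multimodal fact-checking evaluation:
# each (image, claim) pair receives a verdict from one of three tasks.
TASKS = {
    "manipulation": ["manipulated", "authentic"],
    "out_of_context": ["out-of-context", "in-context"],
    "veracity": ["true", "false", "not-enough-info"],
}

def classify(model, task, image, claim):
    """Map a free-form LVLM answer onto the task's closed label set."""
    answer = model(image, claim, TASKS[task]).lower()
    for label in TASKS[task]:
        if label in answer:
            return label
    return TASKS[task][-1]  # unparseable answers fall back to the last label

def accuracy(model, task, examples):
    """Fraction of (image, claim, gold_label) triples classified correctly."""
    hits = sum(classify(model, task, img, clm) == gold
               for img, clm, gold in examples)
    return hits / len(examples)

# Toy stub standing in for an LVLM; it always answers "authentic",
# mimicking the insensitivity to manipulated content the paper reports.
stub = lambda image, claim, labels: "The image looks authentic."
data = [("img0", "claim0", "authentic"), ("img1", "claim1", "manipulated")]
print(accuracy(stub, "manipulation", data))  # 0.5
```

A model that never flags manipulation still scores the base rate on a balanced split, which is why per-task accuracy below chance-adjusted thresholds signals genuine insensitivity rather than noise.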
Problem

Research questions and friction points this paper is trying to address.

Evaluating factual accuracy of LVLMs
Benchmarking multimodal fact-checking stages
Assessing sensitivity to manipulated content
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multimodal fact-checking benchmark
Evaluates the factual accuracy of LVLMs
Publicly accessible resources
Shengkang Wang
Beijing University of Posts and Telecommunications
Hongzhan Lin
Hong Kong Baptist University
Natural Language Processing · Multimodal Reasoning · Social Computing
Ziyang Luo
Salesforce AI Research
Agents · LLMs · Multimodal
Zhen Ye
Hong Kong University of Science and Technology
Guang Chen
Beijing University of Posts and Telecommunications
Jing Ma
Hong Kong Baptist University