VLDBench: Vision Language Models Disinformation Detection Benchmark

📅 2025-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
The detection of multimodal disinformation—specifically, jointly forged text-image content—lacks systematic benchmarks and unified methodologies. Method: This paper introduces VLDBench, the first comprehensive benchmark for multimodal disinformation detection, comprising 31,000 news–image pairs across 13 thematic categories and supporting unified single- and multimodal evaluation. It proposes a semi-automated, expert-collaborative annotation protocol (Cohen's κ = 0.78), aligned with global AI governance frameworks such as the EU AI Act; designs a multi-stage detection paradigm integrating large language models (LLMs) and vision-language models (VLMs); and establishes a standardized evaluation protocol grounded in NIST and MIT AI risk taxonomies. Contribution/Results: Experiments demonstrate that cross-modal fusion improves detection accuracy by 5–35% over unimodal baselines. The dataset, source code, and evaluation toolkit will be fully open-sourced to advance reproducible, governance-aware AI safety research.

📝 Abstract
The rapid rise of AI-generated content has made detecting disinformation increasingly challenging. In particular, multimodal disinformation, i.e., online posts or articles that combine images and text with fabricated information, is specially designed to deceive. While existing AI safety benchmarks primarily address bias and toxicity, multimodal disinformation detection remains largely underexplored. To address this challenge, we present the Vision-Language Disinformation Detection Benchmark (VLDBench), the first comprehensive benchmark for detecting disinformation across both unimodal (text-only) and multimodal (text and image) content, comprising 31,000 news article-image pairs spanning 13 distinct categories for robust evaluation. VLDBench features a rigorous semi-automated data curation pipeline, with 22 domain experts dedicating over 300 hours to annotation, achieving strong inter-annotator agreement (Cohen's κ = 0.78). We extensively evaluate state-of-the-art Large Language Models (LLMs) and Vision-Language Models (VLMs), demonstrating that integrating textual and visual cues in multimodal news posts improves disinformation detection accuracy by 5–35% compared to unimodal models. Developed in alignment with AI governance frameworks such as the EU AI Act, NIST guidelines, and the MIT AI Risk Repository 2024, VLDBench is expected to become a benchmark for detecting disinformation in online multimodal content. Our code and data will be publicly available.
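The reported inter-annotator agreement (Cohen's κ = 0.78) is a chance-corrected statistic computable directly from two annotators' raw labels. A minimal sketch of the computation; the labels below are hypothetical, not drawn from VLDBench:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if each annotator labeled independently
    # according to their own marginal label distribution.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((count_a[c] / n) * (count_b[c] / n)
              for c in count_a.keys() | count_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical annotators labeling five items: disinformation (1) or not (0).
a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 0, 1]
print(round(cohens_kappa(a, b), 3))  # 0.615
```

A κ of 0.78, as reported here, falls in the range conventionally read as "substantial" agreement, lending credibility to the semi-automated annotation protocol.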
Problem

Research questions and friction points this paper is trying to address.

Detecting multimodal disinformation in online content
Evaluating text- and image-based disinformation detection
Improving detection accuracy by integrating textual and visual cues
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-automated data curation pipeline
Integration of textual and visual cues
Alignment with AI governance frameworks