Multimedia Verification Through Multi-Agent Deep Research Multimodal Large Language Models

πŸ“… 2025-07-06
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the challenges of geolocation, temporal attribution, and cross-platform provenance in multimedia misinformation detection, this paper proposes a multi-agent collaborative automated verification framework. Centered on multimodal large language models (MLLMs), the framework integrates reverse image search, metadata parsing, fact-checking databases, and authoritative news source processing into a six-stage deep analytical pipeline, enabling context-aware cross-verification across spatial, temporal, provenance, and motivational dimensions. Its key innovation lies in the first decoupled yet coordinated integration of MLLM-driven deep-reasoning agents with domain-specific toolchains, supporting dynamic task dispatching and evidence-based closed-loop validation. Evaluated on multiple challenging benchmarks, the system achieves substantial improvements: +23.6% in geolocation accuracy, +18.4% in temporal inference accuracy, and +31.2% in cross-platform provenance success rateβ€”thereby enabling explainable, trustworthy authenticity assessment in complex misinformation scenarios.

Technology Category

Application Category

πŸ“ Abstract
This paper presents our submission to the ACMMM25 - Grand Challenge on Multimedia Verification. We developed a multi-agent verification system that combines Multimodal Large Language Models (MLLMs) with specialized verification tools to detect multimedia misinformation. Our system operates through six stages: raw data processing, planning, information extraction, deep research, evidence collection, and report generation. The core Deep Researcher Agent employs four tools: reverse image search, metadata analysis, fact-checking databases, and verified news processing that extracts spatial, temporal, attribution, and motivational context. We demonstrate our approach on a challenge dataset sample involving complex multimedia content. Our system successfully verified content authenticity, extracted precise geolocation and timing information, and traced source attribution across multiple platforms, effectively addressing real-world multimedia verification scenarios.
Problem

Research questions and friction points this paper is trying to address.

Detect multimedia misinformation using MLLMs and verification tools
Verify content authenticity and extract geolocation information
Trace source attribution across multiple platforms effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent system with MLLMs and verification tools
Six-stage process including deep research and evidence collection
Four specialized tools for comprehensive multimedia analysis
H
Huy Hoan Le
Quy Nhon AI, FPT Software, Quy Nhon, Vietnam
V
Van Sy Thinh Nguyen
Quy Nhon AI, FPT Software, Quy Nhon, Vietnam
T
Thi Le Chi Dang
Quy Nhon AI, FPT Software, Quy Nhon, Vietnam
Vo Thanh Khang Nguyen
Vo Thanh Khang Nguyen
AI Researcher
Explainable AIAIReinforcement Learning
Truong Thanh Hung Nguyen
Truong Thanh Hung Nguyen
University of New Brunswick, National Research Council Canada
Contestable AIExplainable AIHuman-centered AIEdge Computing
H
Hung Cao
University of New Brunswick, Fredericton, New Brunswick, Canada