Seeing is Believing: Rich-Context Hallucination Detection for MLLMs via Backward Visual Grounding

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the pervasive hallucination problem of multimodal large language models (MLLMs) on cross-modal tasks, this paper proposes VBackChecker, a novel reference-free, pixel-level hallucination detection framework. Its core idea is a "backward visual grounding" mechanism: a pixel-level Grounding LLM equipped with reasoning and referring-segmentation capabilities traces the generated text back to the image, verifying pixel-level consistency between MLLM responses and the visual input across object-, attribute-, and relationship-level details. To support this, the authors design an instruction-tuning data pipeline (R-Instruct) that produces rich-context descriptions, grounding masks, and hard negative samples, and introduce R²-HalBench, a new hallucination benchmark built from real-world, rich-context descriptions generated by 18 MLLMs. Experiments show that VBackChecker achieves state-of-the-art performance on R²-HalBench, rivaling GPT-4o's detection accuracy, and surpasses prior methods on the pixel-level grounding task by over 10%.
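To make the mechanism concrete, here is a minimal sketch of the claim-by-claim verification loop the summary describes. This is not the authors' implementation (see the GitHub repository for that); all names here, including `extract_claims` and the `grounder.segment` interface, are hypothetical placeholders standing in for the paper's pixel-level Grounding LLM.

```python
# Illustrative sketch of "backward visual grounding": each claim in an
# MLLM response is grounded back to the image via referring segmentation,
# and claims that cannot be grounded are flagged as likely hallucinations.
# All APIs below are hypothetical placeholders, not the authors' code.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ClaimVerdict:
    claim: str       # e.g. "a red umbrella to the left of the bench"
    grounded: bool   # did the grounding model find supporting pixels?
    score: float     # confidence of the predicted segmentation mask

def extract_claims(response: str) -> List[str]:
    # Placeholder: the real pipeline extracts fine-grained object-,
    # attribute-, and relationship-level claims; we just split sentences.
    return [s.strip() for s in response.split(".") if s.strip()]

def check_response(image, response: str, grounder,
                   threshold: float = 0.5) -> Tuple[List[ClaimVerdict], bool]:
    """Verify each textual claim against the image.

    `grounder` stands in for a pixel-level Grounding LLM exposing a
    hypothetical segment(image, text) -> (mask, confidence) interface.
    A claim whose confidence falls below `threshold` is flagged.
    """
    verdicts = []
    for claim in extract_claims(response):
        _mask, score = grounder.segment(image, claim)  # hypothetical API
        verdicts.append(ClaimVerdict(claim, score >= threshold, score))
    # The response is judged hallucination-free only if every claim grounds.
    return verdicts, all(v.grounded for v in verdicts)
```

Because each verdict carries a segmentation mask and score, this style of check is interpretable: a flagged claim can be shown alongside the region (or absence of one) that failed to support it.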

📝 Abstract
Multimodal Large Language Models (MLLMs) have unlocked powerful cross-modal capabilities, but still significantly suffer from hallucinations. As such, accurate detection of hallucinations in MLLMs is imperative for ensuring their reliability in practical applications. To this end, guided by the principle of "Seeing is Believing", we introduce VBackChecker, a novel reference-free hallucination detection framework that verifies the consistency of MLLM-generated responses with visual inputs, by leveraging a pixel-level Grounding LLM equipped with reasoning and referring segmentation capabilities. This reference-free framework not only effectively handles rich-context scenarios, but also offers interpretability. To facilitate this, an innovative pipeline is accordingly designed for generating instruction-tuning data (R-Instruct), featuring rich-context descriptions, grounding masks, and hard negative samples. We further establish R²-HalBench, a new hallucination benchmark for MLLMs, which, unlike previous benchmarks, encompasses real-world, rich-context descriptions from 18 MLLMs with high-quality annotations, spanning diverse object-, attribute-, and relationship-level details. VBackChecker outperforms prior complex frameworks and achieves state-of-the-art performance on R²-HalBench, even rivaling GPT-4o's capabilities in hallucination detection. It also surpasses prior methods in the pixel-level grounding task, achieving over a 10% improvement. All code, data, and models are available at https://github.com/PinxueGuo/VBackChecker.
Problem

Research questions and friction points this paper is trying to address.

Detecting hallucinations in Multimodal Large Language Model (MLLM) responses
Verifying consistency between generated text and visual inputs
Handling rich-context scenarios without requiring reference data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a pixel-level Grounding LLM for visual verification
Generates rich-context instruction-tuning data (R-Instruct) with hard negatives
Creates a benchmark (R²-HalBench) with real-world multimodal hallucination annotations
👥 Authors

Pinxue Guo
Fudan University
Multimodal LLM · Video Understanding · Tracking and Segmentation

Chongruo Wu
UC Davis
Computer Vision

Xinyu Zhou
College of Computational Science and Artificial Intelligence, Fudan University

Lingyi Hong
Fudan University
Computer Vision

Zhaoyu Chen
TikTok
AI Security · Trustworthy AI · Multimodal AI · Generative AI

Jinglun Li
College of Intelligent Robotics and Advanced Manufacturing, Fudan University

Kaixun Jiang
Fudan University
Computer Vision · Adversarial Examples

Sen-ching Samson Cheung
University of Kentucky and University of California, Davis (IEEE Fellow)
Multimedia · Computer Vision · Security · Secure Multiparty Computation · Biometrics

Wei Zhang
College of Computational Science and Artificial Intelligence, Fudan University

Wenqiang Zhang
College of Computational Science and Artificial Intelligence, Fudan University