Multi-Stage Verification-Centric Framework for Mitigating Hallucination in Multi-Modal RAG

📅 2025-07-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address hallucination issues in vision-language models (VLMs) when processing egocentric images, long-tail entities, and multi-turn complex question answering, this paper proposes a fact-verification–centric, multi-stage conservative retrieval-augmented generation (RAG) framework. The framework comprises four modules: query routing, query-aware multimodal retrieval and summarization, dual-path generation (fact-guided and semantics-guided), and post-hoc consistency verification—balancing efficiency and controllability. Its key innovations are a lightweight routing mechanism and a synergistic dual-path generation design, which jointly suppress hallucinations and significantly improve factual accuracy in egocentric image understanding, long-tail knowledge reasoning, and multi-hop QA. Evaluated on the KDD Cup 2025 CRAG-MM Task 1 benchmark, the framework achieves 3rd place, demonstrating its effectiveness for high-fidelity multimodal question answering.
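The four-stage pipeline described above can be sketched as a single control flow. This is a minimal illustrative sketch, not the authors' implementation: every function name, the `Answer` type, and the abstention policy are assumptions made for clarity. The key idea it shows is the conservative strategy: abstain ("I don't know") whenever routing, or the post-hoc consistency check between the two generation paths, signals risk of hallucination.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    grounded: bool  # whether the answer is supported by retrieved evidence

def conservative_rag(
    query: str,
    image: object,
    route: Callable[[str, object], str],              # Stage 1: lightweight query router
    retrieve: Callable[[str, object], list[str]],     # Stage 2: query-aware multimodal retrieval
    summarize: Callable[[list[str]], str],            # Stage 2: evidence summarization
    generate_factual: Callable[[str, str], str],      # Stage 3: fact-guided path
    generate_semantic: Callable[[str, object], str],  # Stage 3: semantics-guided path
    verify: Callable[[str, str, str], bool],          # Stage 4: post-hoc consistency check
) -> Answer:
    """Sketch of a verification-centric multi-stage RAG pipeline.

    Conservative policy: if the router refuses or the verifier rejects the
    candidate, abstain rather than risk a penalized hallucination.
    """
    # Stage 1: route away queries that should be refused outright.
    if route(query, image) == "refuse":
        return Answer("I don't know.", grounded=False)

    # Stage 2: retrieve query-relevant evidence and compress it.
    evidence = summarize(retrieve(query, image))

    # Stage 3: dual-path generation (fact-guided and semantics-guided).
    factual = generate_factual(query, evidence)
    semantic = generate_semantic(query, image)

    # Stage 4: answer only when both paths are consistent with the evidence.
    if verify(factual, semantic, evidence):
        return Answer(factual, grounded=True)
    return Answer("I don't know.", grounded=False)
```

Under a scoring metric that penalizes wrong answers more than abstentions, this abstain-by-default design trades a little completeness for factual accuracy, which is the paper's stated priority.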

📝 Abstract
This paper presents the technical solution developed by team CRUISE for the KDD Cup 2025 Meta Comprehensive RAG Benchmark for Multi-modal, Multi-turn (CRAG-MM) challenge. The challenge aims to address a critical limitation of modern Vision Language Models (VLMs): their propensity to hallucinate, especially when faced with egocentric imagery, long-tail entities, and complex, multi-hop questions. This issue is particularly problematic in real-world applications where users pose fact-seeking queries that demand high factual accuracy across diverse modalities. To tackle this, we propose a robust, multi-stage framework that prioritizes factual accuracy and truthfulness over completeness. Our solution integrates a lightweight query router for efficiency, a query-aware retrieval and summarization pipeline, a dual-pathway generation stage, and a post-hoc verification step. This conservative strategy is designed to minimize hallucinations, which incur a severe penalty in the competition's scoring metric. Our approach achieved 3rd place in Task 1, demonstrating the effectiveness of prioritizing answer reliability in complex multi-modal RAG systems. Our implementation is available at https://github.com/Breezelled/KDD-Cup-2025-Meta-CRAG-MM .
Problem

Research questions and friction points this paper is trying to address.

Mitigating hallucinations in multi-modal RAG systems
Addressing factual inaccuracies in complex multi-hop queries
Improving reliability of Vision Language Models (VLMs)
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-stage framework that prioritizes factual accuracy over completeness
Lightweight query router for efficiency
Dual-pathway generation (fact-guided and semantics-guided) with post-hoc consistency verification