🤖 AI Summary
Existing RAG-based visual question answering (VQA) methods rely on unstructured documents, which introduce noise and overlook the structural relationships among knowledge facts, reducing answer accuracy and reliability.
Method: We propose the first multimodal knowledge graph (KG)-enhanced framework for knowledge-intensive VQA. It comprises three core components: (1) construction of a high-quality, fine-grained, image-text-aligned multimodal KG; (2) a question-aware two-stage retrieval mechanism integrating MLLM-driven keyword extraction and cross-modal matching for precise subgraph retrieval; and (3) structured knowledge injection into the answer generation process.
Contribution/Results: Our framework achieves significant improvements over state-of-the-art methods across multiple knowledge-intensive VQA benchmarks. Experiments demonstrate that incorporating structured multimodal knowledge substantially enhances answer interpretability, accuracy, and robustness—validating its critical role in advancing knowledge-grounded VQA.
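The two-stage retrieval described above (keyword-driven entity matching, then subgraph expansion and structured injection into the generator's context) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy KG, the bag-of-words `embed` stand-in (replacing the MLLM keyword extractor and cross-modal encoder), and all function names are hypothetical.

```python
# Hedged sketch of a dual-stage KG retrieval pipeline. All names and data are
# illustrative; the actual framework uses MLLM keyword extraction and
# cross-modal (vision-text) matching over a multimodal KG.
from collections import Counter
import math

# Toy KG: entity -> (text description, list of (relation, neighbor) edges).
KG = {
    "Eiffel Tower": ("iron lattice tower in Paris", [("located_in", "Paris")]),
    "Paris": ("capital city of France", [("capital_of", "France")]),
    "France": ("country in western Europe", []),
}

def embed(text):
    """Stand-in for a cross-modal encoder: bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_subgraph(question_keywords, top_k=1):
    """Stage 1: coarse entity retrieval by embedding similarity.
    Stage 2: expand each hit into its one-hop subgraph (triples)."""
    q_vec = embed(" ".join(question_keywords))
    scored = sorted(
        KG,
        key=lambda e: cosine(q_vec, embed(e + " " + KG[e][0])),
        reverse=True,
    )
    triples = []
    for entity in scored[:top_k]:
        for rel, nbr in KG[entity][1]:
            triples.append((entity, rel, nbr))
    return triples

def to_prompt(triples):
    """Inject the retrieved structure as linearized context for the generator."""
    return "\n".join(f"{h} --{r}--> {t}" for h, r, t in triples)

triples = retrieve_subgraph(["tower", "Paris"])
print(to_prompt(triples))  # → Eiffel Tower --located_in--> Paris
```

In the full framework, the retrieved triples would carry aligned image references as well as text, and the linearized subgraph would be appended to the MLLM's prompt alongside the question and image.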
📝 Abstract
Recently, Retrieval-Augmented Generation (RAG) has been proposed to expand the internal knowledge of Multimodal Large Language Models (MLLMs) by incorporating external knowledge databases into the generation process, an approach widely used for knowledge-based Visual Question Answering (VQA) tasks. Despite impressive advancements, vanilla RAG-based VQA methods that rely on unstructured documents and overlook the structural relationships among knowledge elements frequently introduce irrelevant or misleading content, reducing answer accuracy and reliability. To overcome these challenges, a promising solution is to integrate multimodal knowledge graphs (KGs) into RAG-based VQA frameworks, enhancing generation with structured multimodal knowledge. Therefore, in this paper, we propose a novel multimodal knowledge-augmented generation framework (mKG-RAG) based on multimodal KGs for knowledge-intensive VQA tasks. Specifically, our approach leverages MLLM-powered keyword extraction and vision-text matching to distill semantically consistent and modality-aligned entities/relationships from multimodal documents, constructing high-quality multimodal KGs as structured knowledge representations. In addition, a dual-stage retrieval strategy equipped with a question-aware multimodal retriever is introduced to improve retrieval efficiency and precision. Comprehensive experiments demonstrate that our approach significantly outperforms existing methods, setting a new state-of-the-art for knowledge-based VQA.