MMKB-RAG: A Multi-Modal Knowledge-Based Retrieval-Augmented Generation Framework

📅 2025-04-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the stale-knowledge and hallucination issues that arise when multimodal large language models (MLLMs) rely solely on parametric knowledge, this paper proposes a retrieval-augmented generation (RAG) framework driven by dynamic semantic tags. The method introduces, for the first time, a dynamic semantic tagging mechanism grounded in the model's intrinsic knowledge boundaries, enabling joint visual–linguistic filtering of retrieved documents and thereby suppressing the noise and irrelevant interference that external databases commonly introduce into conventional RAG. By modeling cross-modal knowledge boundaries and performing adaptive retrieval augmentation, the approach significantly improves answer relevance and factual reliability. Extensive experiments demonstrate state-of-the-art performance on E-VQA and InfoSeek: a +4.2% absolute gain in single-hop visual question answering accuracy, and up to +8.2% improvement on the unseen-question/unseen-entity subsets, along with markedly stronger robustness and generalization.

📝 Abstract
Recent advancements in large language models (LLMs) and multi-modal LLMs have been remarkable. However, these models still rely solely on their parametric knowledge, which limits their ability to generate up-to-date information and increases the risk of producing erroneous content. Retrieval-Augmented Generation (RAG) partially mitigates these challenges by incorporating external data sources, yet the reliance on databases and retrieval systems can introduce irrelevant or inaccurate documents, ultimately undermining both performance and reasoning quality. In this paper, we propose Multi-Modal Knowledge-Based Retrieval-Augmented Generation (MMKB-RAG), a novel multi-modal RAG framework that leverages the inherent knowledge boundaries of models to dynamically generate semantic tags for the retrieval process. This strategy enables the joint filtering of retrieved documents, retaining only the most relevant and accurate references. Extensive experiments on knowledge-based visual question-answering tasks demonstrate the efficacy of our approach: on the E-VQA dataset, our method improves performance by +4.2% on the Single-Hop subset and +0.4% on the full dataset, while on the InfoSeek dataset, it achieves gains of +7.8% on the Unseen-Q subset, +8.2% on the Unseen-E subset, and +8.1% on the full dataset. These results highlight significant enhancements in both accuracy and robustness over the current state-of-the-art MLLM and RAG frameworks.
Problem

Research questions and friction points this paper is trying to address.

Enhancing accuracy in multi-modal knowledge retrieval
Reducing irrelevant documents in RAG frameworks
Improving performance in visual question-answering tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-modal RAG framework with dynamic semantic tags
Joint filtering for relevant and accurate documents
Improves performance on knowledge-based VQA tasks
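
The tag-driven filtering idea above can be illustrated with a minimal sketch: retrieve candidate documents, let a tagger (standing in for the MLLM's semantic-tag generator) assign tags to the query and to each document, and keep only documents whose tags overlap the query's. All function names and the toy tagger here are hypothetical illustrations, not the paper's actual procedure.

```python
# Hypothetical sketch of tag-based document filtering in the spirit of
# MMKB-RAG: the model emits semantic tags, and retrieved documents are
# kept only if their tags overlap the query's tags. Names are illustrative.

def tag(text, tagger):
    """Ask the (stub) tagger for a set of semantic tags for `text`."""
    return set(tagger(text))

def filter_documents(query, documents, tagger, min_overlap=1):
    """Keep documents sharing at least `min_overlap` tags with the query."""
    query_tags = tag(query, tagger)
    return [doc for doc in documents
            if len(query_tags & tag(doc, tagger)) >= min_overlap]

# Toy tagger: keyword lookup standing in for the MLLM's tag generator.
def toy_tagger(text):
    vocab = {"eiffel": "landmark", "tower": "landmark", "paris": "city"}
    return {v for k, v in vocab.items() if k in text.lower()}

docs = ["The Eiffel Tower is in Paris.", "Stock prices fell today."]
print(filter_documents("Which city is the Eiffel Tower in?", docs, toy_tagger))
# → ['The Eiffel Tower is in Paris.']
```

In the actual framework the tagger would be the MLLM itself, conditioned on both the image and the question, so the filter operates jointly over visual and linguistic evidence rather than keywords.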
👥 Authors
Zihan Ling
Peking University, China
Zhiyao Guo
Alibaba Group, China
Yixuan Huang
Alibaba Group, China
Yi An
Peking University, China
Shuai Xiao
Alibaba Group
Jinsong Lan
Alibaba Group, China
Xiaoyong Zhu
Jiangsu University
Bo Zheng
Alibaba Group, China