VAT-KG: Knowledge-Intensive Multimodal Knowledge Graph Dataset for Retrieval-Augmented Generation

📅 2025-06-11
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal knowledge graphs (MMKGs) suffer from narrow knowledge coverage, infrequent updates, and limited modality support (typically text and images only), which hinders their generalization in cross-modal tasks. To address these limitations, we propose VAT-KG, the first vision-audio-text tri-modal, concept-centric knowledge graph, in which each triplet is grounded in raw multimodal data and annotated with fine-grained semantic descriptions. Methodologically, the work introduces: (1) the first concept-level multimodal knowledge alignment framework; (2) a generalizable, automated MMKG construction pipeline; and (3) a fine-grained retrieval-augmented generation (RAG) framework supporting queries from arbitrary modalities. Experiments demonstrate that VAT-KG significantly improves the performance of multimodal large language models (MLLMs) on multimodal question answering, with substantial gains in knowledge breadth, temporal freshness, and modality completeness, setting a new benchmark for tri-modal knowledge representation and reasoning.

📝 Abstract
Multimodal Knowledge Graphs (MMKGs), which represent explicit knowledge across multiple modalities, play a pivotal role by complementing the implicit knowledge of Multimodal Large Language Models (MLLMs) and enabling more grounded reasoning via Retrieval Augmented Generation (RAG). However, existing MMKGs are generally limited in scope: they are often constructed by augmenting pre-existing knowledge graphs, which restricts their coverage and leaves knowledge outdated or incomplete, and they typically support only a narrow range of modalities, such as text and images. These limitations reduce their extensibility and applicability to a broad range of multimodal tasks, particularly as the field shifts toward richer modalities such as video and audio in recent MLLMs. Therefore, we propose the Visual-Audio-Text Knowledge Graph (VAT-KG), the first concept-centric and knowledge-intensive multimodal knowledge graph that covers visual, audio, and text information, where each triplet is linked to multimodal data and enriched with detailed descriptions of concepts. Specifically, our construction pipeline ensures cross-modal knowledge alignment between multimodal data and fine-grained semantics through a series of stringent filtering and alignment steps, enabling the automatic generation of MMKGs from any multimodal dataset. We further introduce a novel multimodal RAG framework that retrieves detailed concept-level knowledge in response to queries from arbitrary modalities. Experiments on question answering tasks across various modalities demonstrate the effectiveness of VAT-KG in supporting MLLMs, highlighting its practical value in unifying and leveraging multimodal knowledge.
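The retrieval flow the abstract describes (a query from any modality is matched against concept-level knowledge, and the retrieved descriptions augment the MLLM prompt) can be sketched as below. This is a minimal illustration, not the paper's actual implementation: it assumes queries and concept entries have already been projected into a shared embedding space, and names such as `CONCEPT_STORE` and `retrieve_concepts` are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical concept store: each entry carries a shared-space embedding
# and a fine-grained textual description, mirroring VAT-KG's idea of
# triplets enriched with concept descriptions.
CONCEPT_STORE = [
    {"concept": "violin", "embedding": [0.9, 0.1, 0.0],
     "description": "A bowed string instrument with four strings tuned in fifths."},
    {"concept": "thunder", "embedding": [0.1, 0.9, 0.1],
     "description": "The sound produced when lightning rapidly heats the air."},
]

def retrieve_concepts(query_embedding, store, k=1):
    """Rank concept entries by similarity to the query embedding.

    The query may originate from any modality (image, audio, or text),
    provided it was encoded into the same shared space as the store.
    """
    ranked = sorted(store,
                    key=lambda e: cosine(query_embedding, e["embedding"]),
                    reverse=True)
    return ranked[:k]

def augment_prompt(question, query_embedding, store, k=1):
    """Prepend retrieved concept descriptions to the question for the MLLM."""
    hits = retrieve_concepts(query_embedding, store, k)
    context = "\n".join(f"- {h['concept']}: {h['description']}" for h in hits)
    return f"Relevant knowledge:\n{context}\n\nQuestion: {question}"
```

The key design point this sketch captures is that retrieval happens at the concept level rather than the document level, so the augmented prompt carries a compact, targeted description instead of raw multimodal data.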
Problem

Research questions and friction points this paper is trying to address.

Addressing outdated knowledge coverage in multimodal knowledge graphs
Overcoming modality limitations in existing multimodal knowledge graphs
Enhancing multimodal reasoning through concept-centric knowledge alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

First concept-centric multimodal knowledge graph
Automatic generation pipeline with cross-modal alignment
Novel multimodal retrieval framework for arbitrary queries
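The pipeline's "stringent filtering and alignment steps" can be illustrated with a simple similarity-threshold filter: a candidate sample survives only if its visual, audio, and text embeddings agree pairwise in a shared space. This is a hedged sketch under that assumption; the threshold value and the function names (`aligned`, `filter_dataset`) are illustrative, not taken from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def aligned(sample, threshold=0.75):
    """Keep a sample only if every modality pair agrees in the shared space.

    `sample` maps each modality name to its embedding; all three embeddings
    are assumed to live in one joint space.
    """
    embs = [sample["visual"], sample["audio"], sample["text"]]
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            if cosine(embs[i], embs[j]) < threshold:
                return False
    return True

def filter_dataset(samples, threshold=0.75):
    """Discard candidates whose modalities disagree, keeping aligned triplets."""
    return [s for s in samples if aligned(s, threshold)]
```

In practice a construction pipeline like this would run such a filter at scale over an existing multimodal dataset, which is what makes automatic MMKG generation from arbitrary sources possible.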
Hyeongcheol Park
Korea University
MinHyuk Jang
Korea University
Hadam Baek
Korea University
Gyusam Chang
Korea University
Jiyoung Seo
Korea University
Jiwan Park
Hogun Park
Associate Professor, Sungkyunkwan University (SKKU)
Data Mining · Machine Learning · Explainable AI · Graph Learning · Natural Language Processing
Sangpil Kim
Korea University
Computer Vision