Debate over Mixed-knowledge: A Robust Multi-Agent Framework for Incomplete Knowledge Graph Question Answering

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-world knowledge graphs (KGs) suffer from irregular incompleteness, posing significant challenges for incomplete KG question answering (IKGQA)—particularly in integrating structured and unstructured knowledge and mitigating dataset bias due to unrealistic incompleteness simulation. To address these issues, we propose DoM, a multi-agent debate framework that decomposes questions into subproblems, retrieves evidence via dual channels (KG and textual corpora), and orchestrates collaborative reasoning through a judge agent to dynamically fuse heterogeneous knowledge sources. Crucially, DoM introduces a novel multi-agent debate mechanism to enable knowledge complementarity and resolve contradictions. We further construct Incomplete KG WebQuestions—the first benchmark reflecting realistic, non-uniform KG evolution patterns. Extensive experiments demonstrate that DoM consistently outperforms state-of-the-art methods on both existing and new benchmarks, with superior robustness and generalization across diverse incompleteness scenarios.

📝 Abstract
Knowledge Graph Question Answering (KGQA) aims to improve factual accuracy by leveraging structured knowledge. However, real-world Knowledge Graphs (KGs) are often incomplete, leading to the problem of Incomplete KGQA (IKGQA). A common solution is to incorporate external data to fill knowledge gaps, but existing methods lack the capacity to adaptively and contextually fuse multiple sources, failing to fully exploit their complementary strengths. To this end, we propose Debate over Mixed-knowledge (DoM), a novel framework that enables dynamic integration of structured and unstructured knowledge for IKGQA. Built upon the Multi-Agent Debate paradigm, DoM assigns specialized agents to perform inference over knowledge graphs and external texts separately, and coordinates their outputs through iterative interaction. It decomposes the input question into sub-questions, retrieves evidence via dual agents (KG and Retrieval-Augmented Generation, RAG), and employs a judge agent to evaluate and aggregate intermediate answers. This collaboration exploits knowledge complementarity and enhances robustness to KG incompleteness. In addition, existing IKGQA datasets simulate incompleteness by randomly removing triples, failing to capture the irregular and unpredictable nature of real-world knowledge incompleteness. To address this, we introduce a new dataset, Incomplete Knowledge Graph WebQuestions, constructed by leveraging real-world knowledge updates. These updates reflect knowledge beyond the static scope of KGs, yielding a more realistic and challenging benchmark. Through extensive experiments, we show that DoM consistently outperforms state-of-the-art baselines.
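The pipeline the abstract describes (question decomposition, dual-channel retrieval, judge-based fusion) can be sketched as below. This is a minimal illustration, not the paper's implementation: the agent internals here are stand-in dictionary lookups, and all function names are hypothetical.

```python
# Sketch of a DoM-style flow: decompose -> dual agents -> judge.
# Agent internals are toy stand-ins (dict lookups), not the paper's models.

def decompose(question):
    # Stand-in for LLM-based decomposition; a real system would split
    # a multi-hop question into an ordered list of sub-questions.
    return [question]

def kg_agent(subq, kg):
    # Structured channel: answer from KG triples if the KG covers subq.
    return kg.get(subq)

def rag_agent(subq, corpus):
    # Unstructured channel: answer via retrieval over external text.
    return corpus.get(subq)

def judge(kg_ans, rag_ans):
    # Fuses the two channels: if they agree, accept; if one channel is
    # missing (e.g. an incomplete KG), fall back to the other. On a real
    # conflict, the paper's judge agent would run further debate rounds.
    if kg_ans == rag_ans:
        return kg_ans
    return kg_ans if kg_ans is not None else rag_ans

def dom_answer(question, kg, corpus):
    answer = None
    for subq in decompose(question):
        answer = judge(kg_agent(subq, kg), rag_agent(subq, corpus))
    return answer
```

The fallback in `judge` is what gives robustness to KG incompleteness: a sub-question the KG cannot answer is still resolved from the textual corpus, and vice versa.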
Problem

Research questions and friction points this paper is trying to address.

Addresses incomplete knowledge graphs in question answering systems
Enables dynamic integration of structured and unstructured knowledge sources
Introduces realistic benchmark for knowledge incompleteness using real-world updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic integration of structured and unstructured knowledge
Multi-agent debate paradigm with specialized inference agents
Novel dataset reflecting real-world knowledge incompleteness patterns