🤖 AI Summary
Existing KGQA benchmarks assume complete knowledge graphs (KGs) and evaluate only shallow, single-hop triplet retrieval, overlooking the practical challenge of answering questions over incomplete KGs that require multi-hop reasoning to infer missing facts. Method: We propose the first KGQA framework tailored for incomplete KGs, modeling question answering as an interactive, agent-based reasoning process over graph-structured environments. Our approach employs reinforcement learning to dynamically orchestrate composable graph reasoning operators—including neighbor expansion, path aggregation, and relation inversion—while maintaining a memory of multi-hop relational paths as evidence. Contribution/Results: We introduce a novel benchmark for KGQA on incomplete KGs and design a zero-shot, environment-interaction–driven reasoning agent with memory augmentation. Experiments show our method significantly outperforms non-training baselines on our benchmark, matches supervised models in performance, and maintains robustness across both complete and incomplete KG settings.
📝 Abstract
Large language models (LLMs) achieve strong results on knowledge graph question answering (KGQA), but most benchmarks assume complete knowledge graphs (KGs) where direct supporting triples exist. This reduces evaluation to shallow retrieval and overlooks the reality of incomplete KGs, where many facts are missing and answers must be inferred from existing facts. We bridge this gap by proposing a methodology for constructing benchmarks under KG incompleteness, which removes direct supporting triples while ensuring that alternative reasoning paths required to infer the answer remain. Experiments on benchmarks constructed using our methodology show that existing methods suffer consistent performance degradation under incompleteness, highlighting their limited reasoning ability. To overcome this limitation, we present the Adaptive Graph Reasoning Agent (GR-Agent). It first constructs an interactive environment from the KG, and then formalizes KGQA as agent environment interaction within this environment. GR-Agent operates over an action space comprising graph reasoning tools and maintains a memory of potential supporting reasoning evidence, including relevant relations and reasoning paths. Extensive experiments demonstrate that GR-Agent outperforms non-training baselines and performs comparably to training-based methods under both complete and incomplete settings.