🤖 AI Summary
Traditional knowledge graph question answering (KGQA) methods suffer from rigid schema constraints and semantic ambiguity, resulting in limited knowledge coverage and weak multi-hop reasoning capabilities. To address these limitations, we propose a knowledge graph-enhanced retrieval-augmented generation (KG-RAG) framework. Our approach replaces brittle semantic parsing with a flexible subgraph retrieval mechanism, integrated with subgraph filtering, information summarization, and chain-of-thought reasoning by a fine-tuned large language model (LLM) to suppress noise and improve accuracy. This end-to-end pipeline balances precision and generalizability, outperforming state-of-the-art methods by approximately 7% across multiple benchmarks and achieving 10–21% higher answer quality than GPT-4o (Tool). The core innovation lies in decoupling knowledge retrieval from schema dependency, thereby substantially expanding coverage and strengthening complex, multi-hop reasoning.
📝 Abstract
Retrieval-Augmented Generation (RAG) mitigates hallucination in Large Language Models (LLMs) by incorporating external data, with Knowledge Graphs (KGs) offering crucial information for question answering. Traditional Knowledge Graph Question Answering (KGQA) methods rely on semantic parsing, which typically retrieves only the knowledge strictly necessary for answer generation, and thus often suffers from low coverage due to rigid schema requirements and semantic ambiguity. We present KERAG, a novel KG-based RAG pipeline that enhances QA coverage by retrieving a broader subgraph likely to contain relevant information. Our retrieval-filtering-summarization approach, combined with fine-tuned LLMs for Chain-of-Thought reasoning on knowledge subgraphs, reduces noise and improves QA for both simple and complex questions. Experiments demonstrate that KERAG surpasses state-of-the-art solutions by about 7% in quality and exceeds GPT-4o (Tool) by 10–21%.
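The retrieval-filtering-summarization idea can be sketched minimally as follows. This is an illustrative toy, not KERAG's actual implementation: the triple store, the keyword-overlap filter, and all function names here are assumptions; the paper's system uses learned retrieval and a fine-tuned LLM for the final reasoning step.

```python
# Toy knowledge graph as (subject, relation, object) triples.
KG = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1M"),
    ("Berlin", "capital_of", "Germany"),
]

def retrieve_subgraph(entity, hops=2):
    """Expand a broad subgraph around the seed entity (high recall)."""
    frontier, triples = {entity}, []
    for _ in range(hops):
        hop = [t for t in KG if t[0] in frontier or t[2] in frontier]
        triples.extend(t for t in hop if t not in triples)
        frontier |= {t[0] for t in hop} | {t[2] for t in hop}
    return triples

def filter_triples(triples, question):
    """Keep triples whose relation overlaps the question (precision step);
    fall back to the full subgraph if nothing matches."""
    words = set(question.lower().replace("?", "").split())
    kept = [t for t in triples if set(t[1].split("_")) & words]
    return kept or triples

def summarize(triples):
    """Linearize the filtered subgraph into text for the LLM prompt."""
    return ". ".join(f"{s} {r.replace('_', ' ')} {o}" for s, r, o in triples)

question = "What is Paris the capital of?"
context = summarize(filter_triples(retrieve_subgraph("Paris"), question))
print(context)  # → Paris capital of France
```

In the full pipeline, `context` would be placed in the prompt of a fine-tuned LLM that performs Chain-of-Thought reasoning over the subgraph; the broad retrieval step trades precision for coverage, which the filtering and summarization stages then recover.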