🤖 AI Summary
This work addresses the problem that LLM-generated knowledge graphs (KGs) introduce redundant entities and erroneous relations into graph-based retrieval-augmented generation (RAG), degrading retrieval and generation performance while increasing computational overhead. We propose DEG-RAG, the first systematic denoising framework for LLM-generated KGs. The method comprises two core components: (1) a fine-grained entity resolution mechanism leveraging embedding similarity and multi-strategy blocking; and (2) a triple filtering mechanism integrating semantic consistency evaluation with reflective re-scoring. Experiments across multiple state-of-the-art graph-enhanced RAG models show that the approach significantly improves question-answering accuracy (average +4.2%), reduces KG size by 38.7%, and cuts inference latency by 22.5%. The framework establishes a scalable, high-fidelity denoising paradigm for LLM-driven KG construction.
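The entity-resolution component described above (embedding similarity plus blocking) can be illustrated with a minimal sketch. Note the assumptions: the paper uses real embedding models and multiple blocking strategies, whereas this toy version substitutes a character-trigram "embedding", a single first-letter blocking key, and a fixed cosine threshold; the function names (`resolve_entities`, `embed`) are illustrative, not from the paper.

```python
from collections import defaultdict
from math import sqrt

def embed(name: str) -> dict:
    """Toy embedding: bag of character trigrams (stand-in for a real encoder)."""
    s = f"  {name.lower()}  "
    vec = defaultdict(int)
    for i in range(len(s) - 2):
        vec[s[i:i + 3]] += 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def resolve_entities(entities, threshold=0.8):
    """Blocking + embedding similarity: compare candidates only within a block,
    and map near-duplicate surface forms onto one canonical entity."""
    # Blocking step: group by first character so pairwise comparison stays cheap.
    blocks = defaultdict(list)
    for e in entities:
        blocks[e[0].lower()].append(e)
    canonical = {}  # surface form -> canonical form
    for block in blocks.values():
        kept = []  # canonical representatives seen so far in this block
        for e in block:
            match = next((k for k in kept
                          if cosine(embed(e), embed(k)) >= threshold), None)
            canonical[e] = match if match else e
            if match is None:
                kept.append(e)
    return canonical

mapping = resolve_entities(["New York City", "new york city", "NYC", "Boston"])
```

Here the case-variant duplicate collapses onto one canonical node, while a genuinely different surface form like "NYC" survives the threshold and would require a stronger matcher, which is precisely the gap the paper's multi-strategy blocking and learned embeddings target.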
📝 Abstract
Retrieval-Augmented Generation (RAG) systems give large language models (LLMs) on-demand access to relevant external information during generation, demonstrating strong performance in mitigating common LLM failure modes such as hallucination, factual inaccuracy, and knowledge cutoffs. Graph-based RAG further extends this paradigm by incorporating knowledge graphs (KGs), leveraging rich, structured connections for more precise and inferential responses. A critical challenge, however, is that most Graph-based RAG systems rely on LLMs for automated KG construction, often yielding noisy KGs with redundant entities and unreliable relations. This noise degrades retrieval and generation performance while also increasing computational cost. Crucially, current research does not comprehensively address the denoising problem for LLM-generated KGs. In this paper, we introduce DEnoised knowledge Graphs for Retrieval Augmented Generation (DEG-RAG), a framework that addresses these challenges through: (1) entity resolution, which eliminates redundant entities, and (2) triple reflection, which removes erroneous relations. Together, these techniques yield more compact, higher-quality KGs that significantly outperform their unprocessed counterparts. Beyond the method itself, we conduct a systematic evaluation of entity resolution for LLM-generated KGs, examining different blocking strategies, embedding choices, similarity metrics, and entity-merging techniques. To the best of our knowledge, this is the first comprehensive exploration of entity resolution in LLM-generated KGs. Our experiments demonstrate that this straightforward approach not only drastically reduces graph size but also consistently improves question-answering performance across diverse popular Graph-based RAG variants.
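The triple-reflection component can likewise be sketched. A large assumption up front: in the paper, consistency scoring and re-scoring are performed by an LLM judge; the rule-based `score_triple` stub below merely stands in for that judge so the control flow (score, reflectively re-score borderline triples, filter) is runnable, and all names and thresholds here are illustrative.

```python
def score_triple(triple) -> float:
    """Stub consistency scorer. In the actual framework this role is played by
    an LLM judging semantic consistency; here we just penalise self-loops and
    vacuous placeholder relations."""
    head, relation, tail = triple
    if head == tail or relation.strip().lower() in {"related to", "is", "unknown"}:
        return 0.2
    return 0.9

def reflect_and_filter(triples, keep=0.7, borderline=0.4):
    """Two-pass filtering: score every triple, give borderline cases one
    reflective re-scoring pass, and keep only confident triples."""
    kept = []
    for t in triples:
        s = score_triple(t)
        if borderline <= s < keep:
            # Reflective re-scoring: with an LLM judge this would re-prompt
            # with the first verdict in context; the stub is deterministic.
            s = score_triple(t)
        if s >= keep:
            kept.append(t)
    return kept

clean = reflect_and_filter([
    ("Paris", "capital of", "France"),   # plausible relation, retained
    ("Paris", "related to", "Paris"),    # self-loop with vacuous relation, dropped
])
```

The design point the sketch preserves is that filtering is not a single hard cut: low-confidence triples are dropped, high-confidence ones kept, and the middle band gets a second look before a decision is made.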