🤖 AI Summary
Existing KG-RAG methods suffer from cognitive blind spots: they fail to detect flaws in their own retrieval paths, which leads to relevance drift and incomplete evidence. To address this, we propose MetaKGRAG, the first KG-RAG framework to integrate metacognitive mechanisms, establishing a closed-loop "perceive-evaluate-adjust" process for self-reflective, dynamic refinement of knowledge graph retrieval paths. Our approach jointly incorporates path-dependency modeling, trajectory-connected correction, multi-hop structured reasoning, and an intrinsic self-assessment mechanism. Evaluated on five benchmark datasets spanning the medical, legal, and commonsense reasoning domains, MetaKGRAG consistently outperforms state-of-the-art KG-RAG and self-refinement baselines, with significant improvements in reasoning accuracy, evidence completeness, and cross-domain generalization.
📝 Abstract
Knowledge Graph-based Retrieval-Augmented Generation (KG-RAG) significantly enhances the reasoning capabilities of Large Language Models by leveraging structured knowledge. However, existing KG-RAG frameworks typically operate as open-loop systems and suffer from cognitive blindness: an inability to recognize their own exploration deficiencies. This leads to relevance drift and incomplete evidence, which existing self-refinement methods, designed for unstructured text-based RAG, cannot effectively resolve due to the path-dependent nature of graph exploration. To address this challenge, we propose Metacognitive Knowledge Graph Retrieval-Augmented Generation (MetaKGRAG), a novel framework inspired by the human metacognition process, which introduces a Perceive-Evaluate-Adjust cycle to enable path-aware, closed-loop refinement. This cycle empowers the system to self-assess exploration quality, identify deficiencies in coverage or relevance, and perform trajectory-connected corrections from precise pivot points. Extensive experiments across five datasets in the medical, legal, and commonsense reasoning domains demonstrate that MetaKGRAG consistently outperforms strong KG-RAG and self-refinement baselines. These results highlight the critical need for path-aware refinement in structured knowledge retrieval.
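The abstract gives no implementation details, but the Perceive-Evaluate-Adjust cycle can be loosely illustrated in code. The sketch below is a minimal, hypothetical rendering of the idea: a greedy (open-loop) walk over a toy knowledge graph misses part of the query's evidence, and a closed-loop pass then backtracks to a pivot point on the path and tries an alternative branch. The graph, scoring rule, and all names (`relevance`, `explore`, `meta_retrieve`) are illustrative assumptions, not the paper's actual method.

```python
# Toy knowledge graph as an adjacency list (hypothetical data, not from the paper).
GRAPH = {
    "aspirin": ["treats:headache", "inhibits:cox1"],
    "treats:headache": [],
    "inhibits:cox1": ["reduces:inflammation"],
    "reduces:inflammation": [],
}

# Terms the query needs the evidence path to cover (illustrative stand-in
# for a real relevance/coverage model).
QUERY_TERMS = {"aspirin", "inflammation"}

def relevance(path):
    """Perceive: score a path by the fraction of query terms it covers."""
    text = " ".join(path)
    return sum(term in text for term in QUERY_TERMS) / len(QUERY_TERMS)

def explore(start, steps=2):
    """Open-loop baseline: greedily follow the first outgoing edge."""
    path = [start]
    for _ in range(steps):
        successors = GRAPH.get(path[-1], [])
        if not successors:
            break
        path.append(successors[0])
    return path

def meta_retrieve(start, threshold=1.0):
    """Closed-loop retrieval: explore, self-assess, and, if coverage is
    deficient, backtrack to a pivot point and try alternative branches."""
    path = explore(start)
    score = relevance(path)                      # Evaluate: self-assessment
    if score < threshold:                        # Adjust: trajectory-connected fix
        for pivot in range(len(path) - 1, 0, -1):
            for alt in GRAPH.get(path[pivot - 1], []):
                candidate = path[:pivot] + [alt]
                # Extend the corrected trajectory greedily from the new branch.
                while GRAPH.get(candidate[-1]):
                    candidate.append(GRAPH[candidate[-1]][0])
                if relevance(candidate) > score:
                    path, score = candidate, relevance(candidate)
    return path, score
```

In this toy example the greedy walk stops at `treats:headache` and covers only half the query terms; the adjustment step pivots back to `aspirin`, takes the `inhibits:cox1` branch instead, and reaches full coverage. The point of the sketch is the control flow, not the scoring: a real system would replace `relevance` with a learned assessment of coverage and drift.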