Empowering GraphRAG with Knowledge Filtering and Integration

📅 2025-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
GraphRAG suffers from two key limitations: (1) retrieval noise degrading answer quality, and (2) over-reliance on external knowledge undermining the model’s intrinsic reasoning capabilities. To address these, we propose GraphRAG-FI—a framework featuring a two-stage graph knowledge filtering mechanism that suppresses irrelevant and low-relevance knowledge at both the subgraph and node levels. Furthermore, we introduce a logits-driven dynamic integration strategy that adaptively weights internal reasoning (from the LLM’s inherent parameters) and external graph knowledge based on output-layer logits—marking the first such approach in KGQA. Evaluated across multiple knowledge graph question answering (KGQA) benchmarks, GraphRAG-FI significantly improves reasoning accuracy, maintains compatibility with mainstream LLM backbones, reduces hallucination rates by over 35%, and enhances both system robustness and decision interpretability.
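The summary above describes the logits-driven integration only at a high level. A minimal sketch of one plausible reading, in which the generator's output-layer confidence decides whether to trust the retrieval-augmented answer or the LLM's parametric answer (the function names, the softmax-confidence heuristic, and the threshold value are all illustrative assumptions, not the paper's actual formulation):

```python
import math

def softmax(logits):
    """Convert raw output-layer logits to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def integrate(internal_logits, augmented_logits, tau=0.75):
    """Logits-based selection (illustrative sketch).

    internal_logits  -- logits for the answer produced without retrieval
    augmented_logits -- logits for the answer produced with graph context
    tau              -- assumed confidence threshold

    Prefer the retrieval-augmented answer only when the model is clearly
    confident in it; otherwise fall back to the LLM's intrinsic reasoning,
    which curbs over-reliance on noisy retrievals.
    """
    p_aug = max(softmax(augmented_logits))
    p_int = max(softmax(internal_logits))
    return "augmented" if p_aug >= max(tau, p_int) else "internal"
```

In this toy form, a confidently peaked logit vector from the augmented pass wins; a flat one defers to the model's parametric answer.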

📝 Abstract
In recent years, large language models (LLMs) have revolutionized the field of natural language processing. However, they often suffer from knowledge gaps and hallucinations. Graph retrieval-augmented generation (GraphRAG) enhances LLM reasoning by integrating structured knowledge from external graphs. However, we identify two key challenges that plague GraphRAG: (1) retrieving noisy and irrelevant information can degrade performance, and (2) excessive reliance on external knowledge suppresses the model's intrinsic reasoning. To address these issues, we propose GraphRAG-FI (Filtering and Integration), consisting of GraphRAG-Filtering and GraphRAG-Integration. GraphRAG-Filtering employs a two-stage filtering mechanism to refine retrieved information. GraphRAG-Integration employs a logits-based selection strategy to balance external knowledge from GraphRAG with the LLM's intrinsic reasoning, reducing over-reliance on retrievals. Experiments on knowledge graph QA tasks demonstrate that GraphRAG-FI significantly improves reasoning performance across multiple backbone models, establishing a more reliable and effective GraphRAG framework.
Problem

Research questions and friction points this paper is trying to address.

Addresses noisy and irrelevant information retrieval in GraphRAG.
Reduces over-reliance on external knowledge in LLM reasoning.
Improves reasoning performance in knowledge graph QA tasks.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage filtering mechanism refines retrieved information.
Logits-based selection balances external and intrinsic knowledge.
Reduces over-reliance on external knowledge retrievals.
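The two-stage filtering can be pictured as a coarse subgraph-level gate followed by a fine triple-level selection. A minimal sketch under assumed details (the toy term-overlap scorer, threshold, and top-k values are illustrative; the paper's actual scoring functions are not reproduced here):

```python
def two_stage_filter(triples, question_terms, subgraph_threshold=0.3, top_k=5):
    """Illustrative two-stage filter over retrieved (head, relation, tail) triples.

    Stage 1 (subgraph level): discard the whole retrieved subgraph if its
    average relevance to the question is too low.
    Stage 2 (node/triple level): keep only the top-k most relevant triples.
    """
    def score(triple):
        # Toy relevance: fraction of question terms appearing in the triple.
        text = " ".join(triple).lower()
        return sum(t.lower() in text for t in question_terms) / len(question_terms)

    scores = [score(t) for t in triples]
    # Stage 1: coarse gate on the subgraph as a whole.
    if not triples or sum(scores) / len(scores) < subgraph_threshold:
        return []
    # Stage 2: fine-grained selection, preserving original triple order.
    ranked = sorted(zip(scores, range(len(triples))), reverse=True)
    keep = sorted(i for _, i in ranked[:top_k])
    return [triples[i] for i in keep]
```

The point of the two stages is efficiency and noise control: a cheap coarse check rejects wholly off-topic retrievals before the finer per-triple ranking runs.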