From Local to Global: A Graph RAG Approach to Query-Focused Summarization

📅 2024-04-24
🏛️ arXiv.org
📈 Citations: 160
✨ Influential: 25
📄 PDF
🤖 AI Summary
Existing RAG methods struggle with queries requiring global understanding of a corpus (e.g., "What are the main themes in the dataset?"), while query-focused summarization (QFS) does not scale to large corpora. To address this, the paper proposes GraphRAG, a graph-based RAG approach. It builds its index in two stages: first, an LLM extracts an entity-level knowledge graph from the source text; second, community detection produces hierarchical, semantically cohesive community summaries. By unifying RAG's retrieval capability with QFS's abstraction strength, GraphRAG supports global sensemaking over private corpora in the million-token range. Experiments show that GraphRAG substantially outperforms conventional RAG in answer comprehensiveness and diversity while remaining scalable.
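The two-stage indexing described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `llm_extract_relations` is a hypothetical stand-in for an LLM extraction call, and plain connected components stand in for the hierarchical community detection the paper uses.

```python
from collections import defaultdict

def llm_extract_relations(chunk: str) -> list[tuple[str, str]]:
    # Hypothetical stand-in for an LLM call that extracts entity-relation
    # edges from a text chunk; here we naively link adjacent capitalized words.
    words = [w.strip(".,") for w in chunk.split() if w[0].isupper()]
    return list(zip(words, words[1:]))

def build_graph_index(chunks: list[str]):
    # Stage 1: derive an entity knowledge graph (adjacency lists) from the text.
    adj = defaultdict(set)
    for chunk in chunks:
        for a, b in llm_extract_relations(chunk):
            adj[a].add(b)
            adj[b].add(a)
    # Stage 2: group closely related entities. The paper applies hierarchical
    # community detection to this graph; connected components stand in here.
    # Each resulting community would then be summarized by the LLM ahead of
    # query time.
    seen, communities = set(), []
    for node in list(adj):
        if node in seen:
            continue
        stack, comm = [node], set()
        while stack:
            n = stack.pop()
            if n in comm:
                continue
            comm.add(n)
            stack.extend(adj[n] - comm)
        seen |= comm
        communities.append(comm)
    return adj, communities
```

Keeping the community summaries pregenerated is the key design choice: the expensive LLM work happens once at index time, so global questions at query time only touch summaries, not raw text.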

πŸ“ Abstract
The use of retrieval-augmented generation (RAG) to retrieve relevant information from an external knowledge source enables large language models (LLMs) to answer questions over private and/or previously unseen document collections. However, RAG fails on global questions directed at an entire text corpus, such as "What are the main themes in the dataset?", since this is inherently a query-focused summarization (QFS) task, rather than an explicit retrieval task. Prior QFS methods, meanwhile, do not scale to the quantities of text indexed by typical RAG systems. To combine the strengths of these contrasting methods, we propose GraphRAG, a graph-based approach to question answering over private text corpora that scales with both the generality of user questions and the quantity of source text. Our approach uses an LLM to build a graph index in two stages: first, to derive an entity knowledge graph from the source documents, then to pregenerate community summaries for all groups of closely related entities. Given a question, each community summary is used to generate a partial response, before all partial responses are again summarized in a final response to the user. For a class of global sensemaking questions over datasets in the 1 million token range, we show that GraphRAG leads to substantial improvements over a conventional RAG baseline for both the comprehensiveness and diversity of generated answers.
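The query-time map-reduce the abstract describes (partial answer per community summary, then a final reduce over the partials) can be sketched as below. The `llm` callable is a hypothetical placeholder; a trivial stub stands in for the model here so the orchestration itself is visible.

```python
def llm(prompt: str) -> str:
    # Stub standing in for an LLM call: echoes the last line of the prompt.
    return prompt.splitlines()[-1]

def answer_global_question(question: str, community_summaries: list[str],
                           llm=llm) -> str:
    # Map: each pregenerated community summary yields a partial answer.
    partials = [
        llm(f"Answer '{question}' using this summary:\n{s}")
        for s in community_summaries
    ]
    # Reduce: summarize all partial answers into one final response.
    joined = "\n".join(partials)
    return llm(f"Combine these partial answers to '{question}':\n{joined}")
```

Because each map call sees only one community summary, the approach stays within context-window limits even when the underlying corpus exceeds a million tokens.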
Problem

Research questions and friction points this paper is trying to address.

Global question summarization
Graph-based QFS scaling
Entity knowledge graph generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-based RAG approach
Entity knowledge graph construction
Community summaries pregeneration
Darren Edge
Microsoft Research
Ha Trinh
Data Scientist, Microsoft
Artificial Intelligence · Human-Computer Interaction
Newman Cheng
Microsoft Strategic Missions and Technologies
Joshua Bradley
Microsoft Strategic Missions and Technologies
Alex Chao
Microsoft Office of the CTO
Apurva Mody
Microsoft Office of the CTO
Steven Truitt
Microsoft Strategic Missions and Technologies
Jonathan Larson
Microsoft Research
network machine learning