WildGraphBench: Benchmarking GraphRAG with Wild-Source Corpora

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GraphRAG evaluation benchmarks predominantly rely on short texts and curated corpora, failing to capture the challenges posed by long-context inputs and heterogeneous large-scale documents in real-world scenarios. This work proposes WildGraphBench, a novel benchmark comprising 1,100 questions constructed from Wikipedia articles and their external references, encompassing three multi-granularity tasks: single-fact QA, multi-fact QA, and paragraph-level summarization. Evidence graphs are built from citation links to support complex reasoning. Evaluation on this benchmark reveals that current GraphRAG systems, while effective at aggregating medium-scale multi-source evidence, struggle with fine-grained information retention, particularly in summarization tasks: an overemphasis on high-level statements leads to the omission of critical details and significantly degrades performance.

📝 Abstract
Graph-based Retrieval-Augmented Generation (GraphRAG) organizes external knowledge as a hierarchical graph, enabling efficient retrieval and aggregation of scattered evidence across multiple documents. However, many existing benchmarks for GraphRAG rely on short, curated passages as external knowledge, failing to adequately evaluate systems in realistic settings involving long contexts and large-scale heterogeneous documents. To bridge this gap, we introduce WildGraphBench, a benchmark designed to assess GraphRAG performance in the wild. We leverage Wikipedia's unique structure, where cohesive narratives are grounded in long and heterogeneous external reference documents, to construct a benchmark reflecting real-world scenarios. Specifically, we sample articles across 12 top-level topics, using their external references as the retrieval corpus and citation-linked statements as ground truth, resulting in 1,100 questions spanning three levels of complexity: single-fact QA, multi-fact QA, and section-level summarization. Experiments across multiple baselines reveal that current GraphRAG pipelines help on multi-fact aggregation when evidence comes from a moderate number of sources, but this aggregation paradigm may overemphasize high-level statements at the expense of fine-grained details, leading to weaker performance on summarization tasks. Project page: https://github.com/BstWPY/WildGraphBench.
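The construction described in the abstract (citation-linked statements as ground truth, external references as the retrieval corpus) can be sketched roughly as follows. This is a minimal illustration of the general idea, not the paper's actual pipeline; the data layout and function names are assumptions.

```python
from collections import defaultdict

# Hypothetical mini-corpus: Wikipedia-style statements, each citing
# external reference documents by ID (all names are illustrative).
statements = [
    {"id": "s1", "text": "The dam was completed in 1936.", "cites": ["ref_a"]},
    {"id": "s2", "text": "It generates 4.5 TWh annually.", "cites": ["ref_b", "ref_c"]},
]

def build_evidence_graph(statements):
    """Link each citation-bearing statement to its reference documents,
    and index references back to the statements they ground."""
    stmt_to_refs = {s["id"]: set(s["cites"]) for s in statements}
    ref_to_stmts = defaultdict(set)
    for s in statements:
        for ref in s["cites"]:
            ref_to_stmts[ref].add(s["id"])
    return stmt_to_refs, ref_to_stmts

stmt_to_refs, ref_to_stmts = build_evidence_graph(statements)
# A multi-fact question grounded in s2 must aggregate two sources:
print(sorted(stmt_to_refs["s2"]))  # ['ref_b', 'ref_c']
```

In this framing, a single-fact question maps to one reference, a multi-fact question to several, and a summarization target to all statements of a section, which is where the abstract reports fine-grained details being lost.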
Problem

Research questions and friction points this paper is trying to address.

GraphRAG
benchmark
heterogeneous documents
long-context retrieval
realistic evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

GraphRAG
benchmark
heterogeneous documents
retrieval-augmented generation
real-world evaluation
Pengyu Wang
University of Science and Technology of China, Hefei, China
Benfeng Xu
University of Science and Technology of China
Natural Language Processing, Large Language Models, Information Extraction
L. Zhang
University of Science and Technology of China, Hefei, China
Shaohan Wang
University of Science and Technology of China, Hefei, China
Mingxuan Du
University of Science and Technology of China, Hefei, China
Chiwei Zhu
University of Science and Technology of China
Post Training of LLMs
Zhendong Mao
University of Science and Technology of China
CV, NLP