Pruning Minimal Reasoning Graphs for Efficient Retrieval-Augmented Generation

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes AutoPrunedRetriever, a graph-structured retrieval-augmented generation (RAG) system that addresses the high computational overhead and latency of conventional RAG approaches caused by redundant retrieval and full re-reasoning. By persisting and incrementally expanding a minimal reasoning subgraph, the method replaces raw text with a symbolic graph for both retrieval and prompting. It introduces a two-tier graph compression and pruning mechanism that leverages an ID-based index codebook, ANN/KNN-based alias detection, and selective k-means clustering to efficiently compress entity-relation graphs, integrating either REBEL or LLM-extracted triples. Evaluated on complex reasoning benchmarks including GraphRAG-Benchmark, STEM, and TV, the approach achieves state-of-the-art performance, improving accuracy by 9–11 percentage points over HippoRAG2 while reducing token consumption by up to two orders of magnitude.
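The ID-indexed codebook with ANN/KNN alias detection described in the summary could be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the class name, the similarity threshold, and the brute-force cosine search (standing in for a real ANN index) are all assumptions.

```python
import numpy as np

class GraphCodebook:
    """Toy ID-indexed codebook: maps entity strings to integer IDs and
    folds near-duplicate entities (aliases) onto an existing ID via
    nearest-neighbour search over unit-norm embeddings. Brute-force
    cosine KNN stands in for a proper ANN index here."""

    def __init__(self, alias_threshold=0.9):
        self.alias_threshold = alias_threshold
        self.names = []   # ID -> canonical entity string
        self.vecs = []    # ID -> unit-norm embedding

    def add(self, name, embedding):
        """Return the entity's ID, reusing an existing ID for aliases."""
        v = np.asarray(embedding, dtype=float)
        v = v / np.linalg.norm(v)
        if self.vecs:
            sims = np.stack(self.vecs) @ v      # cosine similarity to all stored entities
            best = int(np.argmax(sims))
            if sims[best] >= self.alias_threshold:
                return best                     # alias: merge onto existing ID
        self.names.append(name)
        self.vecs.append(v)
        return len(self.names) - 1              # fresh ID

cb = GraphCodebook()
a = cb.add("aspirin", [1.0, 0.0])
b = cb.add("acetylsalicylic acid", [0.99, 0.05])  # near-duplicate embedding
c = cb.add("ibuprofen", [0.0, 1.0])
```

Here `b` resolves to the same ID as `a` because the two embeddings exceed the cosine threshold, while `c` receives a new ID; downstream prompting then refers to compact IDs rather than raw strings.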

📝 Abstract
Retrieval-augmented generation (RAG) is now standard for knowledge-intensive LLM tasks, but most systems still treat every query as fresh, repeatedly re-retrieving long passages and re-reasoning from scratch, inflating tokens, latency, and cost. We present AutoPrunedRetriever, a graph-style RAG system that persists the minimal reasoning subgraph built for earlier questions and incrementally extends it for later ones. AutoPrunedRetriever stores entities and relations in a compact, ID-indexed codebook and represents questions, facts, and answers as edge sequences, enabling retrieval and prompting over symbolic structure instead of raw text. To keep the graph compact, we apply a two-layer consolidation policy (fast ANN/KNN alias detection plus selective k-means once a memory threshold is reached) and prune low-value structure, while prompts retain only overlap representatives and genuinely new evidence. We instantiate two front ends: AutoPrunedRetriever-REBEL, which uses REBEL as a triplet parser, and AutoPrunedRetriever-llm, which swaps in an LLM extractor. On GraphRAG-Benchmark (Medical and Novel), both variants achieve state-of-the-art complex reasoning accuracy, improving over HippoRAG2 by roughly 9–11 points, and remain competitive on contextual summarization and generation. On our harder STEM and TV benchmarks, AutoPrunedRetriever again ranks first, while using up to two orders of magnitude fewer tokens than graph-heavy baselines, making it a practical substrate for long-running sessions, evolving corpora, and multi-agent pipelines.
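The second consolidation layer in the abstract, selective k-means triggered only once a memory threshold is reached, might look roughly like the sketch below. The function names, the budget parameter, and the plain NumPy Lloyd's-algorithm k-means are illustrative assumptions, not the paper's actual policy.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain NumPy k-means (Lloyd's algorithm); a stand-in for the
    selective clustering step, not the paper's implementation."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels

def consolidate(embeddings, memory_threshold=100, k=None):
    """Cluster entity embeddings only when the graph exceeds a memory
    budget; return a mapping old_id -> representative_id."""
    n = len(embeddings)
    if n <= memory_threshold:
        return {i: i for i in range(n)}       # below budget: no merging
    X = np.asarray(embeddings, dtype=float)
    labels = kmeans(X, k or max(2, n // 2))
    rep, mapping = {}, {}
    for i, lab in enumerate(labels):
        # first member seen in each cluster becomes its representative
        mapping[i] = rep.setdefault(int(lab), i)
    return mapping
```

The key design point this illustrates is selectivity: the cheap ANN/KNN alias check runs continuously, while the more expensive clustering pass is deferred until the entity count crosses the budget, keeping steady-state overhead low.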
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
reasoning efficiency
token cost
latency
knowledge-intensive tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Generation
Reasoning Graph Pruning
Incremental Knowledge Graph
Symbolic Retrieval
Efficient LLM Reasoning
Ning Wang
Cornell University
Kuanyan Zhu
University of Cambridge
Daniel Yuehwoon Yee
The University of Hong Kong
Yitang Gao
HKUST
Shiying Huang
Cornell University
Zirun Xu
University of British Columbia
Sainyam Galhotra
Cornell University
Data Integration · Algorithms · Responsible Data Science