You Don't Need Pre-built Graphs for RAG: Retrieval Augmented Generation with Adaptive Reasoning Structures

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing GraphRAG approaches rely on pre-constructed knowledge graphs, incurring high annotation costs, suffering from update latency, and failing to adapt their static graph structures to the diverse logical requirements of user queries—leading to inefficient retrieval. This paper proposes LogicRAG, a dynamic retrieval-augmented generation framework that eliminates the need for pre-built graphs. Its core innovation is on-the-fly construction of logic-aware directed acyclic graphs (DAGs) during inference, achieved via query decomposition, topological sorting for linearization, dynamic graph generation, graph pruning, and context-aware filtering—enabling adaptive, multi-step, logically consistent reasoning. LogicRAG significantly reduces token consumption and retrieval overhead while outperforming state-of-the-art methods on complex question-answering benchmarks. It achieves a favorable trade-off between reasoning accuracy and computational efficiency.
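The DAG-and-linearization step above can be sketched in a few lines. In the paper the decomposition is produced by an LLM at inference time; here the subproblem graph is a hypothetical, hand-written example for a multi-hop query, just to show how topological sorting yields a logically consistent answering order.

```python
from graphlib import TopologicalSorter

# Hypothetical subproblem DAG for a multi-hop query such as
# "Which film directed by the spouse of person A won award B?"
# Each key is a subproblem; its value is the set of subproblems
# it logically depends on (LogicRAG builds this on the fly).
dag = {
    "q1: who is person A's spouse?": set(),
    "q2: which films did that spouse direct?": {"q1: who is person A's spouse?"},
    "q3: which of those films won award B?": {"q2: which films did that spouse direct?"},
}

# Linearize the DAG (the topological-sort step), so each subproblem
# is retrieved and answered only after its prerequisites.
order = list(TopologicalSorter(dag).static_order())
for sub in order:
    print(sub)
```

Each subproblem in `order` would then drive one round of retrieval, with earlier answers available as context for later steps.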

📝 Abstract
Large language models (LLMs) often suffer from hallucination, generating factually incorrect statements when handling questions beyond their knowledge and perception. Retrieval-augmented generation (RAG) addresses this by retrieving query-relevant contexts from knowledge bases to support LLM reasoning. Recent advances leverage pre-constructed graphs to capture the relational connections among distributed documents, showing remarkable performance in complex tasks. However, existing Graph-based RAG (GraphRAG) methods rely on a costly process to transform the corpus into a graph, introducing overwhelming token cost and update latency. Moreover, real-world queries vary in type and complexity, requiring different logic structures for accurate reasoning. The pre-built graph may not align with these required structures, resulting in ineffective knowledge retrieval. To this end, we propose a **Logic**-aware **R**etrieval-**A**ugmented **G**eneration framework (**LogicRAG**) that dynamically extracts reasoning structures at inference time to guide adaptive retrieval without any pre-built graph. LogicRAG begins by decomposing the input query into a set of subproblems and constructing a directed acyclic graph (DAG) to model the logical dependencies among them. To support coherent multi-step reasoning, LogicRAG then linearizes the graph using topological sort, so that subproblems can be addressed in a logically consistent order. In addition, LogicRAG applies graph pruning to reduce redundant retrieval and uses context pruning to filter irrelevant context, significantly reducing the overall token cost. Extensive experiments demonstrate that LogicRAG achieves both superior performance and efficiency compared to state-of-the-art baselines.
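The context-pruning idea in the abstract can be illustrated with a minimal sketch. The paper filters irrelevant context with the model itself; the lexical-overlap scorer below (`prune_contexts`, a hypothetical stand-in, not the paper's method) only shows the shape of the step: drop passages unrelated to the current subproblem before they reach the LLM, cutting token cost.

```python
def prune_contexts(passages, subproblem, min_overlap=2):
    """Context-pruning sketch: keep only passages sharing at least
    `min_overlap` terms with the current subproblem. A hypothetical
    lexical scorer standing in for LogicRAG's model-based filtering."""
    terms = set(subproblem.lower().split())
    kept = []
    for p in passages:
        if len(set(p.lower().split()) & terms) >= min_overlap:
            kept.append(p)
    return kept

passages = [
    "the eiffel tower stands in paris",
    "bananas are a good source of potassium",
]
print(prune_contexts(passages, "where is the eiffel tower"))
```

Only the first passage survives, since the second shares no terms with the subproblem; in the full pipeline this filtering runs once per subproblem in the linearized DAG.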
Problem

Research questions and friction points this paper is trying to address.

LLMs hallucinate without external knowledge support
Pre-built graphs in RAG are costly and inflexible
Dynamic reasoning structures improve retrieval accuracy and efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic reasoning structures without pre-built graphs
Query decomposition into logical subproblem DAG
Graph pruning and context filtering reduce costs