🤖 AI Summary
To address the limitations of conventional RAG (namely, the absence of semantic structure, inflexible one-shot retrieval, and the high computational cost of graph construction), this paper proposes an agent-based graph retrieval framework trained end-to-end via reinforcement learning. The approach makes three key contributions: (1) a lightweight knowledge hypergraph replaces traditional entity-relation graphs, drastically reducing construction overhead; (2) retrieval is formulated as a multi-turn agent–environment interaction, enabling dynamic, fine-grained exploration of graph topology; and (3) an end-to-end reward mechanism optimizes the whole agent process, reducing reliance on long-context reasoning and manual prompt engineering. Extensive experiments on standard RAG benchmarks show that the method consistently outperforms state-of-the-art graph-augmented and RL-enhanced RAG approaches in reasoning accuracy, retrieval efficiency, and generation quality.
📝 Abstract
Retrieval-Augmented Generation (RAG) mitigates hallucination in LLMs by incorporating external knowledge, but typically relies on chunk-based retrieval that lacks structural semantics. GraphRAG methods improve on this by modeling knowledge as entity-relation graphs, yet still face high construction cost, fixed one-shot retrieval, and reliance on long-context reasoning and prompt design. To address these challenges, we propose Graph-R1, an agentic GraphRAG framework trained via end-to-end reinforcement learning (RL). It introduces lightweight knowledge hypergraph construction, models retrieval as a multi-turn agent-environment interaction, and optimizes the agent process through an end-to-end reward mechanism. Experiments on standard RAG datasets show that Graph-R1 outperforms traditional GraphRAG and RL-enhanced RAG methods in reasoning accuracy, retrieval efficiency, and generation quality.
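The core idea of replacing fixed one-shot retrieval with multi-turn agent-environment interaction over a hypergraph can be sketched in a few lines. The sketch below is purely illustrative and not from the paper's code: the toy hypergraph, the `retrieve` environment step, and the `agent_loop` stopping rule are all assumptions, and the RL-trained policy is replaced by a simple "stop when nothing new is retrieved" heuristic.

```python
# Hypothetical sketch of multi-turn agentic retrieval over a knowledge
# hypergraph, in the spirit of Graph-R1. All names and data are illustrative.
from typing import Dict, List, Set

# Toy knowledge hypergraph: each hyperedge links several entities to a fact.
HYPERGRAPH: List[Dict] = [
    {"entities": {"Graph-R1", "reinforcement learning"},
     "fact": "Graph-R1 is trained end-to-end with RL."},
    {"entities": {"Graph-R1", "knowledge hypergraph"},
     "fact": "Graph-R1 retrieves over a lightweight knowledge hypergraph."},
    {"entities": {"RAG", "hallucination"},
     "fact": "RAG mitigates hallucination via external knowledge."},
]

def retrieve(query_entities: Set[str]) -> List[str]:
    """Environment step: return facts whose hyperedge overlaps the query."""
    return [h["fact"] for h in HYPERGRAPH if h["entities"] & query_entities]

def agent_loop(seed: Set[str], max_turns: int = 3) -> List[str]:
    """Agent steps: expand the entity frontier over several turns,
    instead of performing a single one-shot retrieval."""
    frontier, evidence = set(seed), []
    for _ in range(max_turns):
        new = [f for f in retrieve(frontier) if f not in evidence]
        if not new:  # nothing new retrieved: stop early
            break
        evidence.extend(new)
        # Grow the frontier with entities co-occurring in retrieved hyperedges;
        # in Graph-R1 this expansion would be driven by the RL-trained agent.
        for h in HYPERGRAPH:
            if h["fact"] in new:
                frontier |= h["entities"]
    return evidence

print(agent_loop({"Graph-R1"}))
```

Seeding with `{"Graph-R1"}` pulls in both Graph-R1 hyperedges on the first turn, expands the frontier with their co-occurring entities, and stops on the second turn once no new facts appear, which is the dynamic exploration behavior one-shot retrieval cannot express.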