🤖 AI Summary
In Retrieval-Augmented Generation (RAG) systems, insufficient coordination between retrievers and generators often leads to document redundancy, suboptimal evidence utilization, and unreliable reasoning. To address this, we propose SIRAG, a process-supervised, multi-agent RAG framework comprising a lightweight decision agent and a knowledge selection agent. It employs an LLM-as-a-Judge for fine-grained reward evaluation of intermediate actions and leverages tree-structured rollouts to enable multi-path reasoning exploration. SIRAG is plug-and-play and modular, requiring no modification to existing retrievers or generators. It is trained end-to-end via Proximal Policy Optimization (PPO) and significantly improves accuracy on both single-hop and multi-hop question answering. Empirical results demonstrate enhanced system stability, improved convergence robustness, and greater interpretability of the reasoning process.
📄 Abstract
Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to access external knowledge sources, but the effectiveness of RAG relies on the coordination between the retriever and the generator. Since these components are developed independently, their interaction is often suboptimal: the retriever may return irrelevant or redundant documents, while the generator may fail to fully leverage retrieved evidence. In this work, we propose a process-supervised multi-agent framework to bridge the gap between retriever and generator. The framework introduces two lightweight agents: a Decision Maker, which determines whether to continue retrieval or stop for answer generation, and a Knowledge Selector, which filters retrieved documents to retain only the most useful evidence. To provide fine-grained supervision, we employ an LLM-as-a-Judge that evaluates each intermediate action with process-level rewards, ensuring more accurate credit assignment than relying solely on final answer correctness. We further adopt a tree-structured rollout strategy to explore diverse reasoning paths, and train both agents with Proximal Policy Optimization (PPO) in an end-to-end manner. Experiments on single-hop and multi-hop question answering benchmarks show that our approach achieves higher accuracy, more stable convergence, and more interpretable reasoning trajectories than standard RAG baselines. Importantly, the proposed framework is modular and plug-and-play, requiring no modification to the retriever or generator, making it practical for real-world RAG applications.
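To make the control flow concrete, here is a minimal sketch of the Decision Maker / Knowledge Selector loop the abstract describes. Everything below is illustrative: the toy word-overlap retriever, the heuristic selector and stop rule, and all function names are assumptions standing in for the paper's learned, PPO-trained agents, not its actual implementation.

```python
# Hypothetical sketch of SIRAG's agent loop. The retriever, selector, and
# decision policy here are simple heuristics standing in for the learned
# components described in the abstract.

def retrieve(query, corpus, k=2):
    """Toy retriever: rank documents by word overlap with the query."""
    q = set(query.split())
    return sorted(corpus, key=lambda d: -len(q & set(d.split())))[:k]

def knowledge_selector(query, docs, evidence):
    """Stand-in for the Knowledge Selector: keep documents that add
    new, query-relevant words not already covered by the evidence."""
    q = set(query.split())
    seen = {w for d in evidence for w in d.split()}
    kept = []
    for d in docs:
        novel = set(d.split()) - seen
        if novel & q:  # contributes new query-relevant content
            kept.append(d)
            seen |= set(d.split())
    return kept

def decision_maker(evidence, round_idx, max_rounds):
    """Stand-in for the Decision Maker: stop once evidence is found
    or the retrieval budget is exhausted."""
    return "stop" if evidence or round_idx >= max_rounds else "continue"

def rag_loop(query, corpus, max_rounds=3):
    """Iterate retrieve -> select -> decide until the policy stops."""
    evidence = []
    for r in range(1, max_rounds + 1):
        docs = retrieve(query, corpus)
        evidence += knowledge_selector(query, docs, evidence)
        if decision_maker(evidence, r, max_rounds) == "stop":
            break
    return evidence
```

In the paper's framework, the `decision_maker` and `knowledge_selector` decisions would each receive a process-level reward from the LLM judge, so that credit for the final answer can be assigned to individual retrieval and filtering steps rather than only to the end result.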