SIRAG: Towards Stable and Interpretable RAG with A Process-Supervised Multi-Agent Framework

πŸ“… 2025-09-17
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In Retrieval-Augmented Generation (RAG) systems, insufficient coordination between retrievers and generators often leads to document redundancy, suboptimal evidence utilization, and unreliable reasoning. To address this, we propose SIRAGβ€”a process-supervised, multi-agent RAG framework comprising a lightweight decision agent and a knowledge selection agent. It employs LLM-as-a-Judge for fine-grained reward evaluation of intermediate actions and leverages tree-structured rollouts to enable multi-path reasoning exploration. SIRAG is plug-and-play and modular, requiring no modification to existing retrievers or generators. It is trained end-to-end via Proximal Policy Optimization (PPO), significantly improving accuracy on both single-hop and multi-hop question answering. Empirical results demonstrate enhanced system stability, improved convergence robustness, and greater interpretability of the reasoning process.

πŸ“ Abstract
Retrieval-Augmented Generation (RAG) enables large language models (LLMs) to access external knowledge sources, but the effectiveness of RAG relies on the coordination between the retriever and the generator. Since these components are developed independently, their interaction is often suboptimal: the retriever may return irrelevant or redundant documents, while the generator may fail to fully leverage retrieved evidence. In this work, we propose a process-supervised multi-agent framework to bridge the gap between retriever and generator. The framework introduces two lightweight agents: a Decision Maker, which determines when to continue retrieval or stop for answer generation, and a Knowledge Selector, which filters retrieved documents to retain only the most useful evidence. To provide fine-grained supervision, we employ an LLM-as-a-Judge that evaluates each intermediate action with process-level rewards, ensuring more accurate credit assignment than relying solely on final answer correctness. We further adopt a tree-structured rollout strategy to explore diverse reasoning paths, and train both agents with Proximal Policy Optimization (PPO) in an end-to-end manner. Experiments on single-hop and multi-hop question answering benchmarks show that our approach achieves higher accuracy, more stable convergence, and produces more interpretable reasoning trajectories compared with standard RAG baselines. Importantly, the proposed framework is modular and plug-and-play, requiring no modification to the retriever or generator, making it practical for real-world RAG applications.
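The two-agent control flow described above can be sketched as a simple loop: the Knowledge Selector filters each retrieval batch, and the Decision Maker chooses between another retrieval round and final answer generation. This is an illustrative sketch only; the function interfaces and query-refinement step are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the SIRAG agent loop. The agent/retriever/generator
# signatures are illustrative assumptions, not the paper's exact API.

def sirag_answer(question, retriever, knowledge_selector, decision_maker,
                 generator, max_rounds=4):
    """Alternate retrieval and evidence filtering until the Decision Maker stops."""
    evidence = []
    query = question
    for _ in range(max_rounds):
        docs = retriever(query)                      # fetch candidate documents
        kept = knowledge_selector(question, docs)    # retain only useful evidence
        evidence.extend(kept)
        action = decision_maker(question, evidence)  # "continue" or "stop"
        if action == "stop":
            break
        query = question + " " + " ".join(kept)      # refine next-round query (assumed)
    return generator(question, evidence)
```

Because the agents only wrap the retriever and generator calls, the loop leaves both components unmodified, which is what makes the framework plug-and-play.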
Problem

Research questions and friction points this paper is trying to address.

Optimizing coordination between retriever and generator in RAG systems
Reducing irrelevant documents and improving evidence utilization
Providing fine-grained supervision for intermediate reasoning steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework coordinates retriever and generator
Process-level rewards from LLM-as-a-Judge supervise agents
Tree-structured rollout and PPO enable end-to-end training
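Process-level supervision means each intermediate action receives its own judge score instead of sharing one terminal reward. A minimal sketch of that credit assignment, assuming per-step judge scores are combined with a final answer-correctness reward via discounted returns (the scoring scale and discounting scheme are assumptions, not the paper's exact specification):

```python
# Illustrative process-level credit assignment: each intermediate action has a
# judge score; the final answer reward is added at the last step, and
# discounted returns give every step a distinct training signal for PPO.

def process_returns(step_rewards, final_reward, gamma=0.95):
    """Per-step discounted returns from judge scores plus a terminal reward."""
    rewards = list(step_rewards)
    rewards[-1] += final_reward        # fold answer correctness into the last step
    returns = []
    g = 0.0
    for r in reversed(rewards):        # accumulate from the end of the trajectory
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]               # restore chronological order
```

Compared with outcome-only supervision, where every step inherits the same final reward, this assigns a larger return to steps the judge rated highly, which is the "more accurate credit assignment" the abstract refers to.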
πŸ”Ž Similar Papers
No similar papers found.
Junlin Wang
Duke University
Computer Science, NLP
Zehao Wu
School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
Shaowei Lu
Heyuan Tobacco Monopoly Administration, Xinyuan Road, Yuancheng District, Heyuan, China
Yanlan Li
Heyuan Tobacco Monopoly Administration, Xinyuan Road, Yuancheng District, Heyuan, China
Xinghao Huang
Master of Mechanical Engineering, Tsinghua University