REX-RAG: Reasoning Exploration with Policy Correction in Retrieval-Augmented Generation

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) used in retrieval-augmented generation (RAG) frequently become trapped in unproductive reasoning "dead ends," committing to overconfident errors while under-exploring alternative paths. To address this, the paper proposes REX-RAG, a reinforcement learning (RL)-based framework that unifies dynamic knowledge retrieval and policy optimization to enable robust reasoning exploration. Key contributions: (1) a mixed sampling strategy that combines probe sampling with exploratory prompting to actively escape erroneous reasoning trajectories; and (2) a policy correction mechanism that uses importance sampling to mitigate the resulting distribution shift and keep policy learning stable. Evaluated on seven open-domain question-answering benchmarks, REX-RAG improves answer accuracy by an average of 5.1% with Qwen2.5-3B and 3.6% with Qwen2.5-7B, outperforming strong baselines and demonstrating both effectiveness and generalizability.

📝 Abstract
Reinforcement learning (RL) is emerging as a powerful paradigm for enabling large language models (LLMs) to perform complex reasoning tasks. Recent advances indicate that integrating RL with retrieval-augmented generation (RAG) allows LLMs to dynamically incorporate external knowledge, leading to more informed and robust decision making. However, we identify a critical challenge during policy-driven trajectory sampling: LLMs are frequently trapped in unproductive reasoning paths, which we refer to as "dead ends", committing to overconfident yet incorrect conclusions. This severely hampers exploration and undermines effective policy optimization. To address this challenge, we propose REX-RAG (Reasoning Exploration with Policy Correction in Retrieval-Augmented Generation), a novel framework that explores alternative reasoning paths while maintaining rigorous policy learning through principled distributional corrections. Our approach introduces two key innovations: (1) Mixed Sampling Strategy, which combines a novel probe sampling method with exploratory prompts to escape dead ends; and (2) Policy Correction Mechanism, which employs importance sampling to correct distribution shifts induced by mixed sampling, thereby mitigating gradient estimation bias. We evaluate it on seven question-answering benchmarks, and the experimental results show that REX-RAG achieves average performance gains of 5.1% on Qwen2.5-3B and 3.6% on Qwen2.5-7B over strong baselines, demonstrating competitive results across multiple datasets. The code is publicly available at https://github.com/MiliLab/REX-RAG.
Problem

Research questions and friction points this paper is trying to address.

Prevents LLMs from getting stuck in unproductive reasoning paths
Addresses overconfident incorrect conclusions in policy-driven sampling
Enhances exploration and policy optimization in RAG systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixed Sampling Strategy to escape dead ends
Policy Correction Mechanism for distribution shifts
Reinforcement learning integrated with RAG
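The policy correction idea above can be sketched in code. This is a minimal illustration, not the paper's implementation: it assumes per-trajectory log-probabilities under the current policy and under the mixed (probe + exploratory) sampler, and applies a clipped importance ratio to the advantage so the gradient estimate stays approximately unbiased despite the shifted sampling distribution. The function names and the clipping threshold are assumptions for illustration.

```python
import math

def importance_weight(logp_policy: float, logp_behavior: float,
                      clip: float = 10.0) -> float:
    """Importance ratio pi_theta(traj) / q(traj), clipped for variance control.

    logp_policy:   log-probability of the trajectory under the current policy
    logp_behavior: log-probability under the mixed sampling distribution
    """
    ratio = math.exp(logp_policy - logp_behavior)
    return min(ratio, clip)

def corrected_advantage(advantage: float, logp_policy: float,
                        logp_behavior: float) -> float:
    """Reweight an advantage estimate so that policy-gradient updates computed
    from mixed-sampling trajectories remain consistent with on-policy learning."""
    return importance_weight(logp_policy, logp_behavior) * advantage
```

When the trajectory was drawn from the policy itself, the ratio is 1 and the update is unchanged; trajectories rescued by exploratory prompting are down- or up-weighted according to how likely the policy itself would have produced them, which is the standard off-policy correction the abstract describes.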
Wentao Jiang
School of Computer Science, Wuhan University, China
Xiang Feng
ShanghaiTech University
Neural Radiance Fields, Image Super Resolution, Computer Vision
Zengmao Wang
Associate Professor, School of Computer Science, Wuhan University
Artificial Intelligence, Machine Learning, Remote Sensing
Yong Luo
Wuhan University
Artificial Intelligence, Machine Learning, Data Mining, Pattern Classification and Search
Pingbo Xu
Department of Anesthesiology, Zhejiang Cancer Hospital, China; Institute of Medicine, Chinese Academy of Sciences, Hangzhou, Zhejiang, China
Zhe Chen
Department of Computer Science and Information Technology, La Trobe University, Australia
Bo Du
Department of Management, Griffith Business School
Sustainable Transport, Travel Behaviour, Urban Data Analytics, Logistics and Supply Chain
Jing Zhang
School of Computer Science, Wuhan University, China