UR$^2$: Unify RAG and Reasoning through Reinforcement Learning

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing RAG and RLVR paradigms separately enhance knowledge retrieval or complex reasoning, yet remain disjointed, leading to poor generalization and narrow task adaptability. To address this, we propose the first reinforcement learning–based general framework that dynamically coordinates retrieval and reasoning. Our method introduces a difficulty-aware curriculum training mechanism that triggers retrieval only on demand; employs a hybrid knowledge access strategy integrating external retrieval with LLM-generated self-summaries; and optimizes reasoning end-to-end via verifiable reward signals. Evaluated on open-domain QA, MMLU-Pro, medical reasoning, and mathematical reasoning, our framework, built on Qwen2.5 and LLaMA-3.1 backbones, achieves performance competitive with GPT-4o-mini. Crucially, it establishes the first organic unification of RAG and RLVR across diverse tasks, demonstrating superior cross-task generalization and robustness.

📝 Abstract
Large Language Models (LLMs) have shown remarkable capabilities through two complementary paradigms: Retrieval-Augmented Generation (RAG), which enhances knowledge grounding, and Reinforcement Learning from Verifiable Rewards (RLVR), which optimizes complex reasoning abilities. However, these two capabilities are often developed in isolation, and existing efforts to unify them remain narrow in scope, typically limited to open-domain QA with fixed retrieval settings and task-specific assumptions. This lack of integration constrains generalization and limits the applicability of RAG-RL methods to broader domains. To bridge this gap, we propose UR2 (Unified RAG and Reasoning), a general framework that unifies retrieval and reasoning through reinforcement learning. UR2 introduces two key contributions: a difficulty-aware curriculum training that selectively invokes retrieval only for challenging problems, and a hybrid knowledge access strategy combining domain-specific offline corpora with LLM-generated summaries. These components are designed to enable dynamic coordination between retrieval and reasoning, improving adaptability across a diverse range of tasks. Experiments across open-domain QA, MMLU-Pro, medical, and mathematical reasoning tasks demonstrate that UR2 (built on Qwen2.5-3/7B and LLaMA-3.1-8B) significantly outperforms existing RAG and RL methods, achieving comparable performance to GPT-4o-mini and GPT-4.1-mini on several benchmarks. We have released all code, models, and data at https://github.com/Tsinghua-dhy/UR2.
Problem

Research questions and friction points this paper is trying to address.

RAG and RLVR capabilities are typically developed in isolation
Existing RAG-RL integrations are narrow in scope: fixed retrieval settings, task-specific assumptions
Lack of integration limits generalization and applicability beyond open-domain QA
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unifies RAG and reasoning via reinforcement learning
Difficulty-aware curriculum training for selective retrieval
Hybrid knowledge access combining offline corpora with LLM-generated summaries
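The difficulty-aware curriculum and verifiable-reward loop from the bullets above can be illustrated with a toy Python sketch. Everything below is a hypothetical stand-in, not the paper's implementation: the rollout-based difficulty estimate, the `threshold` value, and the exact-match reward are assumptions chosen to make the gating idea concrete.

```python
def rollout_success_rate(question, answer, model, n=8):
    """Estimate difficulty as the fraction of n retrieval-free rollouts
    that already solve the question (higher = easier)."""
    return sum(model(question) == answer for _ in range(n)) / n

def difficulty_aware_step(question, answer, model, retrieve, threshold=0.5):
    """Invoke retrieval only for hard questions, then score the final
    answer with a verifiable (exact-match) reward signal."""
    if rollout_success_rate(question, answer, model) < threshold:
        context = retrieve(question)              # hard: augment with retrieval
        prediction = model(f"{context}\n{question}")
    else:
        prediction = model(question)              # easy: reason without retrieval
    reward = 1.0 if prediction == answer else 0.0
    return prediction, reward

# Toy stand-ins for a real LLM and retriever (hypothetical):
facts = {"capital of France?": "Paris is the capital of France."}
model = lambda prompt: "Paris" if "France" in prompt else "unknown"
retrieve = lambda q: facts.get(q, "")

pred, reward = difficulty_aware_step("capital of France?", "Paris", model, retrieve)
```

In an actual RLVR setup the scalar `reward` would feed a policy-gradient update of the model rather than just being returned; this sketch only shows the retrieve-or-not gating decision.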
Weitao Li
Department of Computer Science and Technology, Tsinghua University
RAG, RL, Agents, Medicine
Boran Xiang
School of Management Science and Information Engineering, Hebei University of Economics and Business, Hebei, China
Xiaolong Wang
Dept. of Computer Science & Technology, Institute for AI, Tsinghua University, Beijing, China
Zhinan Gou
School of Management Science and Information Engineering, Hebei University of Economics and Business, Hebei, China
Weizhi Ma
Tsinghua University
LLM and Agents, Recommendation, AI for Healthcare
Yang Liu
Dept. of Computer Science & Technology, Institute for AI, Tsinghua University, Beijing, China