R1-Searcher++: Incentivizing the Dynamic Knowledge Acquisition of LLMs via Reinforcement Learning

📅 2025-05-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are prone to hallucinations because they rely on static, potentially outdated internal knowledge. Method: This paper proposes R1-Searcher++, a framework that trains LLMs to use internal knowledge and external retrieval in adaptive synergy. It introduces a two-stage training paradigm: (1) supervised fine-tuning (SFT) as a cold start to learn the interaction format, followed by (2) outcome-supervised reinforcement learning (RL) that teaches the model when to retrieve and how to integrate external evidence. A dedicated reward for internal-knowledge utilization and a memorization mechanism that continuously assimilates retrieved content into the model's parameters keep the two knowledge sources complementary. Contribution/Results: Experiments show that R1-Searcher++ outperforms state-of-the-art RAG and reasoning methods in accuracy and generalization while reducing retrieval overhead, and that the memorization mechanism enables continual enrichment of the model's internal knowledge.

📝 Abstract
Large Language Models (LLMs) are powerful but prone to hallucinations due to static knowledge. Retrieval-Augmented Generation (RAG) helps by injecting external information, but current methods are often costly, generalize poorly, or ignore the internal knowledge of the model. In this paper, we introduce R1-Searcher++, a novel framework designed to train LLMs to adaptively leverage both internal and external knowledge sources. R1-Searcher++ employs a two-stage training strategy: an initial SFT cold-start phase for preliminary format learning, followed by RL for dynamic knowledge acquisition. The RL stage uses outcome supervision to encourage exploration, incorporates a reward mechanism for internal knowledge utilization, and integrates a memorization mechanism to continuously assimilate retrieved information, thereby enriching the model's internal knowledge. By leveraging internal knowledge and an external search engine, the model continuously improves its capabilities, enabling efficient retrieval-augmented reasoning. Our experiments demonstrate that R1-Searcher++ outperforms previous RAG and reasoning methods and achieves efficient retrieval. The code is available at https://github.com/RUCAIBox/R1-Searcher-plus.
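To make the RL signal described above concrete, here is a minimal sketch of an outcome-supervised reward with an internal-knowledge bonus. The function names, the exact-match metric, the group-minimum comparison, and the `bonus` weight are all illustrative assumptions, not the authors' implementation:

```python
def em_score(prediction: str, answer: str) -> float:
    """Outcome reward: 1.0 if the normalized prediction exactly matches the answer."""
    norm = lambda s: " ".join(s.lower().split())
    return 1.0 if norm(prediction) == norm(answer) else 0.0

def rollout_reward(prediction: str, answer: str, num_searches: int,
                   group_min_searches: int, bonus: float = 0.2) -> float:
    """Reward a correct answer; add a small bonus when the rollout was correct
    with the fewest search calls in its sampled group, nudging the model to
    rely on internal knowledge when it suffices."""
    r = em_score(prediction, answer)
    if r > 0 and num_searches <= group_min_searches:
        r += bonus
    return r
```

In this sketch an incorrect answer earns nothing regardless of search count, so the bonus can only reward parsimony on top of correctness, never in place of it.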
Problem

Research questions and friction points this paper is trying to address.

Addresses LLM hallucinations from static knowledge
Reduces the cost of retrieval-augmented generation and improves its generalization
Balances internal and external knowledge utilization dynamically
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for dynamic knowledge acquisition
Combines internal and external knowledge sources adaptively
Integrates a memorization mechanism to enrich the model's internal knowledge
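The memorization mechanism above can be sketched as converting successful retrieval-augmented rollouts into retrieval-free training examples, so the model gradually internalizes what it looked up. The rollout dictionary fields and the prompt/target format here are hypothetical placeholders, not the paper's actual data schema:

```python
def build_memorization_examples(rollouts):
    """From rollouts that reached the correct answer, turn each
    (question, retrieved passages, answer) triple into a training
    example that asks the model to reproduce the evidence and answer
    without calling the retriever."""
    examples = []
    for r in rollouts:
        if not r["correct"]:
            continue  # only assimilate evidence that led to a correct answer
        context = "\n".join(r["retrieved_passages"])
        examples.append({
            "prompt": r["question"],
            "target": f"{context}\nAnswer: {r['answer']}",
        })
    return examples
```

Training on such examples (e.g. with a standard SFT loss) is one plausible way the retrieved knowledge could be folded back into the model between RL iterations.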