An Empirical Study on Reinforcement Learning for Reasoning-Search Interleaved LLM Agents

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the optimization of RL-based LLM agents where reasoning and search are tightly interleaved. Addressing three critical factors—reward modeling, LLM initialization, and the role of search engines—we propose a structured reward function (demonstrating superiority of format validation rewards over intermediate retrieval rewards), introduce an inference-specialized LLM initialization strategy, and uncover the profound impact of search engines on training dynamics and robustness. Built upon the PPO framework, our approach integrates multi-source search APIs (Bing/Google) and multi-scale LLMs (Llama3/Qwen/DeepSeek-R1), achieving 12.3% higher answer accuracy and a 27% improvement in retrieval-reasoning coordination success on HotpotQA and SciFact. Contributions include: (i) the first systematic decoupling of search and reasoning roles within RL-based agent learning; (ii) a reusable, empirically grounded training configuration guide; and (iii) open-sourcing of all code and experimental configurations.

Technology Category

Application Category

📝 Abstract
Reinforcement learning (RL) has demonstrated strong potential in training large language models (LLMs) capable of complex reasoning for real-world problem solving. More recently, RL has been leveraged to create sophisticated LLM-based search agents that adeptly combine reasoning with search engine use. While the use of RL for training search agents is promising, the optimal design of such agents remains not fully understood. In particular, key factors -- such as (1) reward formulation, (2) the choice and characteristics of the underlying LLM, and (3) the role of the search engine in the RL process -- require further investigation. In this work, we conduct comprehensive empirical studies to systematically investigate these and offer actionable insights. We highlight several key findings: format rewards are effective in improving final performance, whereas intermediate retrieval rewards have limited impact; the scale and initialization of the LLM (general-purpose vs. reasoning-specialized) significantly influence RL outcomes; and the choice of search engine plays a critical role in shaping RL training dynamics and the robustness of the trained agent during inference. These establish important guidelines for successfully building and deploying LLM-based search agents in real-world applications. Code is available at https://github.com/PeterGriffinJin/Search-R1.
Problem

Research questions and friction points this paper is trying to address.

Optimizing reward design for RL-trained LLM search agents
Evaluating LLM scale and specialization impact on RL outcomes
Assessing search engine influence on RL training dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning trains reasoning-search LLM agents
Reward formulation and LLM choice impact RL outcomes
Search engine selection affects RL training dynamics
🔎 Similar Papers
No similar papers found.