🤖 AI Summary
This work addresses the over-reliance on external search engines in reinforcement learning (RL). We propose Self-Search RL, the first framework to treat large language models (LLMs) as trainable, autonomous search simulators. Methodologically, we explicitly model the search process via structured prompting, repeated sampling, and fine-grained reward shaping based on output format and domain-specific rules, thereby mitigating hallucination and enabling end-to-end self-iterative optimization. Our core contribution is endowing LLMs with intrinsic, RL-trainable search capabilities, eliminating the need for external search APIs. Experiments demonstrate that the trained policy significantly improves pass@k on open-domain question answering, drastically reduces dependency on external search, and exhibits strong cross-task transferability and promising generalization from simulation to real-world deployment.
📄 Abstract
We investigate the potential of large language models (LLMs) to serve as efficient simulators for agentic search tasks in reinforcement learning (RL), thereby reducing dependence on costly interactions with external search engines. To this end, we first quantify the intrinsic search capability of LLMs via structured prompting and repeated sampling, which we term Self-Search. Our results reveal that LLMs exhibit strong scaling behavior with respect to the inference budget, achieving high pass@k on question-answering benchmarks, including the challenging BrowseComp task. Building on these observations, we introduce Self-Search RL (SSRL), which enhances LLMs' Self-Search capability through format-based and rule-based rewards. SSRL enables models to iteratively refine their knowledge utilization internally, without requiring access to external tools. Empirical evaluations demonstrate that SSRL-trained policy models provide a cost-effective and stable environment for search-driven RL training, reducing reliance on external search engines and facilitating robust sim-to-real transfer. We draw the following conclusions: 1) LLMs possess world knowledge that can be effectively elicited to achieve high performance; 2) SSRL demonstrates the potential of leveraging internal knowledge to reduce hallucination; 3) SSRL-trained models integrate seamlessly with external search engines without additional effort. Our findings highlight the potential of LLMs to support more scalable RL agent training.
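To make the two measurement and training ingredients above concrete, here is a minimal sketch: the standard unbiased pass@k estimator used to quantify repeated-sampling performance, and a toy combined reward in the spirit of the format-based and rule-based rewards the abstract describes. The `<search>`/`<answer>` tag names, the exact-match rule, and the reward weights are illustrative assumptions, not the paper's actual specification.

```python
import math
import re

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are correct,
    is correct: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer incorrect samples than k, so a hit is guaranteed
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

def format_reward(output: str) -> float:
    """1.0 if the rollout follows the assumed structured template
    (a <search>...</search> block followed by <answer>...</answer>)."""
    has_search = re.search(r"<search>.*?</search>", output, re.DOTALL)
    has_answer = re.search(r"<answer>.*?</answer>", output, re.DOTALL)
    return 1.0 if has_search and has_answer else 0.0

def rule_reward(output: str, gold: str) -> float:
    """Domain rule stand-in: exact match (case-insensitive) between the
    extracted answer span and the gold answer."""
    m = re.search(r"<answer>(.*?)</answer>", output, re.DOTALL)
    if m is None:
        return 0.0
    return 1.0 if m.group(1).strip().lower() == gold.strip().lower() else 0.0

def total_reward(output: str, gold: str,
                 w_format: float = 0.2, w_rule: float = 0.8) -> float:
    """Weighted sum of format and rule rewards; weights are hypothetical."""
    return w_format * format_reward(output) + w_rule * rule_reward(output, gold)
```

For example, with 10 samples of which 4 are correct, `pass_at_k(10, 4, 1)` gives 0.4 while `pass_at_k(10, 4, 5)` is much higher, illustrating the scaling with inference budget that the abstract reports.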