PaperSearchQA: Learning to Search and Reason over Scientific Papers with RLVR

📅 2026-01-26
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing reinforcement learning-based search agents struggle to meet the technical demands of scientific question answering. This work applies the Reinforcement Learning with Verifiable Rewards (RLVR) framework to scientific literature QA for the first time, constructing a retrieval corpus of 16 million biomedical paper abstracts and a fact-based question-answering dataset, PaperSearchQA, comprising 60,000 samples. Building on the Search-R1 architecture, the approach establishes a scalable data curation pipeline and demonstrates emergent agent capabilities in planning, reasoning, and self-verification. Experimental results show that the trained agent significantly outperforms non-RL retrieval baselines. All resources, including the dataset and models, have been open-sourced on Hugging Face to facilitate multi-domain extension and further research.

๐Ÿ“ Abstract
Search agents are language models (LMs) that reason and search knowledge bases (or the web) to answer questions; recent methods supervise only the final answer accuracy using reinforcement learning with verifiable rewards (RLVR). Most RLVR search agents tackle general-domain QA, which limits their relevance to technical AI systems in science, engineering, and medicine. In this work we propose training agents to search and reason over scientific papers -- this tests technical question-answering, it is directly relevant to real scientists, and the capabilities will be crucial to future AI Scientist systems. Concretely, we release a search corpus of 16 million biomedical paper abstracts and construct a challenging factoid QA dataset called PaperSearchQA with 60k samples answerable from the corpus, along with benchmarks. We train search agents in this environment to outperform non-RL retrieval baselines; we also perform further quantitative analysis and observe interesting agent behaviors like planning, reasoning, and self-verification. Our corpus, datasets, and benchmarks are usable with the popular Search-R1 codebase for RLVR training and released on https://huggingface.co/collections/jmhb/papersearchqa. Finally, our data creation methods are scalable and easily extendable to other scientific domains.
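The abstract notes that RLVR supervises only final answer accuracy. The paper's actual reward lives in the Search-R1 codebase; as a rough illustration only, a minimal exact-match verifiable reward for factoid QA (function names and the SQuAD-style normalization are assumptions, not the paper's implementation) might look like:

```python
import re
import string


def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop articles and punctuation,
    collapse whitespace. An assumed choice, not taken from the paper."""
    text = text.lower()
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())


def verifiable_reward(predicted: str, gold_answers: list[str]) -> float:
    """Binary outcome reward: 1.0 if the agent's final answer exactly
    matches any gold answer after normalization, else 0.0. Intermediate
    search and reasoning steps receive no supervision."""
    pred = normalize(predicted)
    return 1.0 if any(pred == normalize(gold) for gold in gold_answers) else 0.0
```

Because the reward checks only the final string, the agent is free to discover its own search, planning, and self-verification behaviors during RL training, which matches the emergent behaviors the paper reports.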
Problem

Research questions and friction points this paper is trying to address.

scientific question answering
search agents
reinforcement learning
biomedical literature
technical QA
Innovation

Methods, ideas, or system contributions that make the work stand out.

RLVR
scientific QA
search agents
PaperSearchQA
reinforcement learning