DeepResearchGym: A Free, Transparent, and Reproducible Evaluation Sandbox for Deep Research

📅 2025-05-25
🤖 AI Summary
Existing deep research systems rely heavily on proprietary commercial search APIs, resulting in irreproducible experiments, opaque evaluation, and high operational costs. This paper introduces DeepResearchGym, an open-source evaluation sandbox for deep research that pairs a reproducible search API (a dense retriever with DiskANN approximate nearest neighbor search over the ClueWeb22 and FineWeb corpora) with an LLM-as-a-judge automatic evaluation protocol. The sandbox enables free, reproducible experimentation with stable document rankings across runs. On an extended Researchy Questions benchmark, the API achieves lower latency than mainstream commercial APIs, and systems built on it reach comparable performance. LLM-based judgments agree strongly with human annotations (Spearman ρ > 0.92). All components, including source code, prebuilt indices, and documentation, are publicly released under an open-source license.
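The agreement figure above is a Spearman rank correlation between LLM-judge scores and human annotations. As a quick illustration of how such a correlation is computed (the scores below are made-up toy data, not the paper's), here is a minimal NumPy sketch:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation for tie-free score lists.

    Ranks each list (double argsort), then applies the closed form
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    """
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    d = rx - ry
    n = len(x)
    return 1.0 - 6.0 * float(np.sum(d * d)) / (n * (n * n - 1))

# Toy example: LLM-judge scores vs. human scores for five reports.
llm_scores = [4, 2, 5, 1, 3]
human_scores = [4, 1, 5, 2, 3]
rho = spearman_rho(llm_scores, human_scores)  # → 0.9
```

A value near 1.0 means the judge ranks reports in nearly the same order as humans, which is the property the paper's human study validates.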

📝 Abstract
Deep research systems represent an emerging class of agentic information retrieval methods that generate comprehensive and well-supported reports to complex queries. However, most existing frameworks rely on dynamic commercial search APIs, which pose reproducibility and transparency challenges in addition to their cost. To address these limitations, we introduce DeepResearchGym, an open-source sandbox that combines a reproducible search API with a rigorous evaluation protocol for benchmarking deep research systems. The API indexes large-scale public web corpora, namely ClueWeb22 and FineWeb, using a state-of-the-art dense retriever and approximate nearest neighbor search via DiskANN. It achieves lower latency than popular commercial APIs while ensuring stable document rankings across runs, and is freely available for research use. To evaluate deep research systems' outputs, we extend the Researchy Questions benchmark with automatic metrics through LLM-as-a-judge assessments to measure alignment with users' information needs, retrieval faithfulness, and report quality. Experimental results show that systems integrated with DeepResearchGym achieve performance comparable to those using commercial APIs, with performance rankings remaining consistent across evaluation metrics. A human evaluation study further confirms that our automatic protocol aligns with human preferences, validating the framework's ability to help support controlled assessment of deep research systems. Our code and API documentation are available at https://www.deepresearchgym.ai.
Problem

Research questions and friction points this paper is trying to address.

Reproducibility challenges in deep research systems
High costs of commercial search APIs
Lack of transparent evaluation protocols
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-source sandbox with reproducible search API
Dense retriever and DiskANN for efficient search
LLM-as-a-judge for automatic evaluation metrics
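The search API described above pairs a dense retriever with approximate nearest neighbor search via DiskANN. A minimal sketch of that core idea, with exact brute-force search standing in for DiskANN and hand-picked 2-D vectors standing in for real retriever embeddings (both are simplifications for illustration, not the paper's implementation):

```python
import numpy as np

def build_index(doc_vectors):
    """Normalize document embeddings so inner product equals cosine similarity."""
    v = np.asarray(doc_vectors, dtype=np.float64)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def search(index, query_vector, k=3):
    """Return the top-k document indices and scores by cosine similarity.

    A production system would replace this exact scan with an ANN
    structure such as DiskANN to scale to web-sized corpora.
    """
    q = np.asarray(query_vector, dtype=np.float64)
    q = q / np.linalg.norm(q)
    scores = index @ q
    top = np.argsort(-scores)[:k]
    return top.tolist(), scores[top].tolist()

# Toy corpus of three "document embeddings".
docs = [[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]]
index = build_index(docs)
ids, scores = search(index, [1.0, 0.1], k=2)  # → ids == [0, 1]
```

Because the index and search are deterministic, repeated queries return identical rankings, which is the stability property the sandbox advertises over dynamic commercial APIs.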
👥 Authors
João Coelho
Carnegie Mellon University
Jingjie Ning
Carnegie Mellon University
Jingyuan He
Carnegie Mellon University
Kangrui Mao
Carnegie Mellon University
A. Paladugu
Carnegie Mellon University
Pranav Setlur
Carnegie Mellon University
Jiahe Jin
Shanghai Jiao Tong University
Artificial Intelligence, Deep Learning, Natural Language Processing
James P. Callan
Carnegie Mellon University
João Magalhães
NOVA LINCS
Bruno Martins
Instituto Superior Técnico and INESC-ID, University of Lisbon
Data Science, Language Technologies, Information Retrieval, Geospatial A.I.
Chenyan Xiong
Associate Professor, Carnegie Mellon University
Information Retrieval, Language Models, Natural Language Understanding