🤖 AI Summary
Existing deep research systems rely heavily on proprietary commercial search APIs, resulting in irreproducible experiments, opaque evaluation, and high operational costs. This paper introduces an open-source evaluation sandbox for deep research systems, combining a reproducible dense retrieval API—built on DiskANN and indexed over ClueWeb22 and FineWeb—with an LLM-as-a-judge automatic evaluation protocol. The sandbox enables free, reproducible experimentation, stable document rankings across runs, and validation against human preferences. On an extended version of the Researchy Questions benchmark, the system achieves lower latency than mainstream commercial APIs while matching their ranking performance, and LLM-based judgments show strong agreement with human annotations (Spearman ρ > 0.92). All components—source code, prebuilt indices, and comprehensive documentation—are publicly released under an open-source license.
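To make the retrieval side concrete, the sketch below shows the core idea behind a dense retrieval API: embed the query and every document into a shared vector space, then rank documents by similarity to the query embedding. This is a brute-force toy stand-in, not the paper's implementation — DiskANN replaces the exhaustive scan with approximate nearest neighbor search so the same operation scales to web-sized corpora like ClueWeb22 and FineWeb; the function name and toy embeddings are illustrative.

```python
import numpy as np

def retrieve(query_vec: np.ndarray, doc_matrix: np.ndarray, k: int = 3) -> list[int]:
    """Rank documents by inner-product similarity to the query embedding.

    A brute-force stand-in for the approximate nearest neighbor search
    that a system like DiskANN performs over a full web-scale index.
    """
    scores = doc_matrix @ query_vec          # one similarity score per document
    return np.argsort(-scores)[:k].tolist()  # indices of the top-k documents

# Toy example: 4 documents embedded in a 3-dimensional space.
docs = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.9],
])
query = np.array([1.0, 0.0, 0.0])
print(retrieve(query, docs))  # → [0, 2, 1]
```

Because the index and retriever are fixed, the same query always yields the same ranking — the property that makes experiments reproducible, in contrast to commercial APIs whose results drift over time.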
📝 Abstract
Deep research systems represent an emerging class of agentic information retrieval methods that generate comprehensive and well-supported reports in response to complex queries. However, most existing frameworks rely on dynamic commercial search APIs, which pose reproducibility and transparency challenges in addition to their cost. To address these limitations, we introduce DeepResearchGym, an open-source sandbox that combines a reproducible search API with a rigorous evaluation protocol for benchmarking deep research systems. The API indexes large-scale public web corpora, namely ClueWeb22 and FineWeb, using a state-of-the-art dense retriever and approximate nearest neighbor search via DiskANN. It achieves lower latency than popular commercial APIs while ensuring stable document rankings across runs, and is freely available for research use. To evaluate deep research systems' outputs, we extend the Researchy Questions benchmark with automatic metrics through LLM-as-a-judge assessments to measure alignment with users' information needs, retrieval faithfulness, and report quality. Experimental results show that systems integrated with DeepResearchGym achieve performance comparable to those using commercial APIs, with performance rankings remaining consistent across evaluation metrics. A human evaluation study further confirms that our automatic protocol aligns with human preferences, validating the framework's ability to support controlled assessment of deep research systems. Our code and API documentation are available at https://www.deepresearchgym.ai.
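The human evaluation study validates the LLM-as-a-judge protocol by checking that automatic scores order systems the same way humans do, which is typically quantified with a rank correlation. The sketch below computes Spearman's ρ in pure Python under the simplifying assumption of no tied scores; the per-report score lists are invented for illustration and are not the paper's data.

```python
def spearman_rho(xs: list[float], ys: list[float]) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks.

    Quantifies agreement between two sets of scores (e.g. LLM-judge
    scores vs. human annotations). Assumes no tied values, in which
    case both rank vectors are permutations with equal variance.
    """
    def ranks(vals: list[float]) -> list[int]:
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mean = (n - 1) / 2  # mean of ranks 0..n-1
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)  # equals var of ry (no ties)
    return cov / var

# Hypothetical per-report scores from an LLM judge and human annotators.
llm_scores = [4.5, 3.0, 5.0, 2.0, 4.0]
human_scores = [4.0, 3.5, 5.0, 2.5, 4.5]
print(round(spearman_rho(llm_scores, human_scores), 2))  # → 0.9
```

A ρ near 1 means the judge preserves the human ordering of reports even when absolute scores differ, which is the property the paper's human study is designed to confirm.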