🤖 AI Summary
This work addresses the lack of comprehensive, objective evaluation methodologies for current deep research systems: existing evaluations often suffer from bias due to coarse metrics or overreliance on large language models. The authors propose the first expert-report-driven, fine-grained evaluation benchmark, comprising 132 cross-domain research tasks. For these tasks they construct a structured assessment framework spanning three dimensions (information recall, analysis, and presentation), built from 9,430 binary scoring criteria derived from expert-written investigative reports. Through a four-stage human-AI collaborative pipeline that combines automated extraction by large language models with over 400 hours of manual expert review, they establish atomic, verifiable scoring rules. Experimental results reveal that even state-of-the-art systems satisfy fewer than 50% of these expert-grounded criteria, highlighting a substantial performance gap between current systems and human experts.
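To make "atomic, verifiable scoring rules" concrete, the sketch below shows what a single binary criterion could look like as data. The schema and field names are illustrative assumptions for exposition, not the benchmark's published format.

```python
# Illustrative (assumed) shape of one atomic, binary scoring rule;
# the benchmark's actual schema may differ.
rubric_example = {
    "dimension": "information recall",  # or "analysis" / "presentation"
    "criterion": "The report names the primary data source used by the original investigation.",
    "granularity": "atomic",            # one verifiable fact or property per rubric
    "answer_type": "yes/no",            # graded as satisfied (1) or not satisfied (0)
}
```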
📝 Abstract
Deep Research Systems (DRS) aim to help users search the web, synthesize information, and deliver comprehensive investigative reports. However, how to rigorously evaluate these systems remains under-explored. Existing deep-research benchmarks often fall into two failure modes. Some do not adequately test a system's ability to analyze evidence and write coherent reports. Others rely on evaluation criteria that are either overly coarse or directly defined by LLMs (or both), leading to scores that can be biased relative to human experts and are hard to verify or interpret. To address these issues, we introduce Deep Research Bench II, a new benchmark for evaluating DRS-generated reports. It contains 132 grounded research tasks across 22 domains; for each task, a system must produce a long-form research report that is evaluated against a task-specific set of fine-grained binary rubrics (9,430 in total across the benchmark), covering three dimensions: information recall, analysis, and presentation. All rubrics are derived from carefully selected expert-written investigative articles and are constructed through a four-stage LLM+human pipeline that combines automatic extraction with over 400 human-hours of expert review, ensuring that the criteria are atomic, verifiable, and aligned with human expert judgment. We evaluate several state-of-the-art deep-research systems on Deep Research Bench II and find that even the strongest models satisfy fewer than 50% of the rubrics, revealing a substantial gap between current deep-research systems and human experts.
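Because every rubric is a binary (satisfied / not satisfied) judgment grouped under one of the three dimensions, the headline number above (fewer than 50% of rubrics satisfied) is a satisfaction rate. The sketch below shows one plausible way such judgments could be aggregated into per-dimension and overall scores; the `RubricResult` structure and its field names are assumptions for illustration, not the benchmark's actual API.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class RubricResult:
    """One binary rubric judgment for a single report (illustrative structure)."""
    dimension: str   # "information recall", "analysis", or "presentation"
    satisfied: bool  # whether the report meets this atomic criterion

def satisfaction_rates(results: list[RubricResult]) -> dict[str, float]:
    """Aggregate binary rubric outcomes into per-dimension and overall rates."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in results:
        totals[r.dimension] += 1
        hits[r.dimension] += int(r.satisfied)
    rates = {dim: hits[dim] / totals[dim] for dim in totals}
    rates["overall"] = sum(hits.values()) / sum(totals.values())
    return rates

# Example: a report satisfying 2 of 3 rubrics gets an overall rate of ~0.67.
example = [
    RubricResult("information recall", True),
    RubricResult("analysis", False),
    RubricResult("presentation", True),
]
print(satisfaction_rates(example))
```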