🤖 AI Summary
Current deep research evaluation systems face significant challenges, including reliance on manual annotation, static evaluation dimensions, and unreliable fact verification when citations are absent. This work proposes the first end-to-end automated framework to address these limitations. It introduces a persona-driven, two-stage task generation process that assesses both task qualification and the necessity of external search, constructing realistic research tasks that require multi-source integration. It further incorporates an agent-based dynamic evaluation mechanism that combines adaptive point-wise quality scoring with proactive web-based fact-checking, enabling high-quality task construction and verifiable factual assessment without human annotation. The proposed approach substantially improves the authenticity, scalability, and reliability of deep research evaluation.
📝 Abstract
Deep research systems are widely used for multi-step web research, analysis, and cross-source synthesis, yet their evaluation remains challenging. Existing benchmarks often require annotation-intensive task construction, rely on static evaluation dimensions, or fail to reliably verify facts when citations are missing. To bridge these gaps, we introduce DeepResearchEval, an automated framework for deep research task construction and agentic evaluation. For task construction, we propose a persona-driven pipeline that generates realistic, complex research tasks anchored in diverse user profiles, applying a two-stage filter (Task Qualification and Search Necessity) to retain only tasks that require multi-source evidence integration and external retrieval. For evaluation, we propose an agentic pipeline with two components: Adaptive Point-wise Quality Evaluation, which dynamically derives task-specific evaluation dimensions, criteria, and weights conditioned on each generated task, and Active Fact-Checking, which autonomously extracts and verifies report statements via web search, even when citations are missing.
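To make the two mechanisms concrete, below is a minimal Python sketch of the two-stage task filter and the weighted aggregation implied by adaptive point-wise scoring. All names here (`Dimension`, `passes_two_stage_filter`, `adaptive_quality_score`) and the normalized weighted-sum rule are illustrative assumptions, not the paper's actual implementation; in the real pipeline, LLM agents would produce the qualification judgments, dimensions, criteria, weights, and scores, and those calls are abstracted away here.

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    """One task-specific evaluation dimension (names/values are illustrative)."""
    name: str
    criterion: str
    weight: float  # task-specific weight, derived per task by the evaluator agent
    score: float   # point-wise score assigned by the judge, e.g. on a 0-10 scale

def passes_two_stage_filter(is_qualified: bool, needs_external_search: bool) -> bool:
    """Stage 1 (Task Qualification) and Stage 2 (Search Necessity):
    retain a candidate task only if both checks succeed."""
    return is_qualified and needs_external_search

def adaptive_quality_score(dimensions: list[Dimension]) -> float:
    """Aggregate per-dimension scores into a single report-level score
    as a normalized weighted sum (an assumed aggregation rule)."""
    total_weight = sum(d.weight for d in dimensions)
    return sum(d.weight * d.score for d in dimensions) / total_weight

# Illustrative usage with made-up dimensions, weights, and scores.
dims = [
    Dimension("coverage", "addresses all sub-questions", weight=0.40, score=8.0),
    Dimension("factuality", "claims verified against sources", weight=0.35, score=7.0),
    Dimension("synthesis", "integrates multiple sources", weight=0.25, score=9.0),
]
print(f"overall quality: {adaptive_quality_score(dims):.2f}")  # -> 7.90
```

The sketch captures the structural idea only: filtering is a conjunction of two independent agent judgments, and quality scoring is a per-task weighted combination rather than a fixed rubric, so the dimensions and weights can differ from task to task.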