DeepResearchEval: An Automated Framework for Deep Research Task Construction and Agentic Evaluation

📅 2026-01-14
📈 Citations: 3
Influential: 0
🤖 AI Summary
Current deep research evaluation systems face significant challenges, including reliance on manual annotations, static evaluation dimensions, and difficulties in fact verification when citations are absent. This work proposes the first end-to-end automated framework that addresses these limitations by introducing a role-driven, two-stage task generation process—assessing both task eligibility and the necessity of external search—to construct realistic research tasks requiring multi-source integration. Furthermore, it incorporates an agent-based dynamic evaluation mechanism that combines adaptive pointwise quality scoring with proactive web-based fact-checking, enabling high-quality task construction and verifiable factual assessment without human annotation. The proposed approach substantially enhances the authenticity, scalability, and reliability of research evaluation.

📝 Abstract
Deep research systems are widely used for multi-step web research, analysis, and cross-source synthesis, yet their evaluation remains challenging. Existing benchmarks often require annotation-intensive task construction, rely on static evaluation dimensions, or fail to reliably verify facts when citations are missing. To bridge these gaps, we introduce DeepResearchEval, an automated framework for deep research task construction and agentic evaluation. For task construction, we propose a persona-driven pipeline that generates realistic, complex research tasks anchored in diverse user profiles, applying a two-stage filter, Task Qualification and Search Necessity, to retain only tasks requiring multi-source evidence integration and external retrieval. For evaluation, we propose an agentic pipeline with two components: an Adaptive Point-wise Quality Evaluation module that dynamically derives task-specific evaluation dimensions, criteria, and weights conditioned on each generated task, and an Active Fact-Checking module that autonomously extracts and verifies report statements via web search, even when citations are missing.
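The Adaptive Point-wise Quality Evaluation derives task-specific dimensions, criteria, and weights, then scores each report along them. A minimal sketch of the aggregation step, assuming the evaluator has already produced per-dimension scores and weights (the names, data structures, and example dimensions here are illustrative, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class Dimension:
    # One task-specific evaluation dimension, as the framework's
    # evaluator might derive it for a given research task.
    name: str
    criterion: str   # natural-language rubric for the judge model
    weight: float    # relative importance; need not sum to 1

def weighted_quality_score(scores: dict[str, float], dims: list[Dimension]) -> float:
    """Aggregate per-dimension scores (e.g. 0-10 from a judge model)
    into a single quality score, normalizing the dimension weights."""
    total_w = sum(d.weight for d in dims)
    if total_w == 0:
        raise ValueError("dimension weights must not all be zero")
    return sum(scores[d.name] * d.weight for d in dims) / total_w

# Hypothetical dimensions for one task; a real run would derive these
# dynamically from the task text via an LLM rather than hard-code them.
dims = [
    Dimension("coverage", "addresses all sub-questions", 0.5),
    Dimension("evidence", "claims grounded in cited sources", 0.3),
    Dimension("clarity", "well-structured, readable report", 0.2),
]
score = weighted_quality_score(
    {"coverage": 8.0, "evidence": 6.0, "clarity": 9.0}, dims
)  # → 7.6
```

Normalizing by the weight sum keeps the aggregate on the same scale as the per-dimension scores even when the derived weights do not sum to one.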
Problem

Research questions and friction points this paper is trying to address.

deep research evaluation
task construction
fact verification
agentic evaluation
benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

persona-driven task generation
agentic evaluation
adaptive quality evaluation
active fact-checking
multi-source evidence integration
Yibo Wang
Infinity Lab, Shanda Group

Lei Wang
Infinity Lab, Shanda Group

Yue Deng
Infinity Lab, Shanda Group

Keming Wu
Ph.D. Student, Tsinghua University
Computer Vision, Vision Language Models, Generative AI

Yao Xiao
Infinity Lab, Shanda Group

Huanjin Yao
Tsinghua University
LLM, MLLM

Liwei Kang
Infinity Lab, Shanda Group

Hai Ye
MiroMind AI; National University of Singapore
Natural Language Processing

Yongcheng Jing
Nanyang Technological University

Lidong Bing
MiroMind, Alibaba DAMO, Tencent, CMU, CUHK
Natural Language Processing, Large Language Models, Large Multimodal Models