🤖 AI Summary
Current deep research agents (DRAs) exhibit significant deficiencies in generating analyst-grade comprehensive reports: mainstream evaluations focus narrowly on question-answering tasks, neglecting report generation capabilities, and existing benchmarks suffer from low task complexity and subjective metrics, failing to reflect real-world requirements. To address these gaps, we introduce FINDER, a fine-grained Deep Research benchmark comprising 100 manually curated research tasks and 419 structured evaluation items, and DEFT, the first systematic taxonomy of DRA failure modes. Leveraging human–LLM collaborative annotation and grounded theory analysis, we conduct a multidimensional empirical evaluation of leading DRAs, revealing that their core bottlenecks lie in evidence integration, cross-source verification, and reasoning-resilient planning, not task understanding. FINDER and DEFT establish a reproducible, scalable paradigm for standardized DRA evaluation and capability advancement.
📝 Abstract
Deep Research Agents (DRAs) aim to automatically produce analyst-level reports through iterative information retrieval and synthesis. However, most existing DRAs have been validated on question-answering benchmarks, while their ability to generate comprehensive reports remains overlooked. Worse, current benchmarks for report synthesis suffer from low task complexity and subjective metrics, which fails to reflect user demands and limits the practical utility of generated reports. To address these gaps, we present Fine-grained DEepResearch bench (FINDER), an enhanced benchmark consisting of 100 human-curated research tasks with 419 structured checklist items that standardize report structure, analytical depth, and factual grounding. Based on approximately 1,000 reports produced by mainstream DRAs, we further propose the Deep rEsearch Failure Taxonomy (DEFT), the first failure taxonomy for deep research agents. DEFT comprises 14 fine-grained failure modes across reasoning, retrieval, and generation, and is built upon grounded theory with human–LLM co-annotation and inter-annotator reliability validation. Our experimental findings reveal that current DRAs struggle not with task comprehension but with evidence integration, verification, and reasoning-resilient planning.