🤖 AI Summary
This work addresses the limitations of existing evaluation frameworks for Deep Research Agents (DRAs), which predominantly rely on large language model–generated references without expert validation and therefore lack fine-grained, objective measures of writing quality and factual verifiability. To bridge this gap, we propose the Wiki Live Challenge (WLC)—a dynamic benchmark built on recently promoted Wikipedia Good Articles—introducing, for the first time, expert-vetted content as a gold standard. We further develop Wiki Eval, a comprehensive evaluation framework that combines 39 fine-grained writing-quality criteria with rigorous measures of factual verifiability. Experiments on 100 high-quality articles reveal a substantial performance gap between current DRA systems and human experts, demonstrating both the difficulty of WLC and its value in advancing agent research.
📝 Abstract
Deep Research Agents (DRAs) have demonstrated remarkable capabilities in autonomous information retrieval and report generation, showing great potential to assist humans in complex research tasks. Current evaluation frameworks rely primarily on LLM-generated references or LLM-derived evaluation dimensions. While these approaches offer scalability, they lack the reliability of expert-verified content and struggle to provide objective, fine-grained assessments of critical dimensions. To bridge this gap, we introduce the Wiki Live Challenge (WLC), a live benchmark that uses the newest Wikipedia Good Articles (GAs) as expert-level references. Wikipedia's strict standards for neutrality, comprehensiveness, and verifiability pose a demanding challenge for DRAs, and GAs represent the articles that meet those standards at the highest level. We curate a dataset of 100 recent Good Articles and propose Wiki Eval, a comprehensive evaluation framework comprising a fine-grained evaluation method with 39 criteria for writing quality and rigorous metrics for factual verifiability. Extensive experiments on a range of DRA systems reveal a significant gap between current DRAs and expert-level human-written Wikipedia articles, validating the effectiveness of WLC in advancing agent research. We release our benchmark at https://github.com/WangShao2000/Wiki_Live_Challenge.
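The "live" aspect of WLC depends on continuously sourcing newly promoted Good Articles so that references postdate model training data. As a rough illustration of how such candidates could be gathered (this is a minimal sketch using the public MediaWiki API, not necessarily the authors' actual curation pipeline; the helper name `newest_good_articles` is hypothetical), one can list members of `Category:Good articles` ordered by when they entered the category:

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def newest_good_articles(limit=100):
    """Return titles of the most recently promoted English Wikipedia Good Articles.

    Pages enter Category:Good articles when promoted, so sorting category
    members by timestamp (descending) surfaces the newest promotions first.
    """
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": "Category:Good articles",
        "cmsort": "timestamp",   # order by when the page joined the category
        "cmdir": "desc",         # newest first
        "cmnamespace": 0,        # main (article) namespace only
        "cmlimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return [m["title"] for m in resp.json()["query"]["categorymembers"]]

if __name__ == "__main__":
    for title in newest_good_articles(10):
        print(title)
```

In a WLC-style setup, titles retrieved this way would still need filtering (e.g., by promotion date relative to a model's knowledge cutoff) before serving as expert-level references.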