DR-Arena: an Automated Evaluation Framework for Deep Research Agents

📅 2026-01-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluation benchmarks rely on static datasets, which struggle to reliably assess deep research agents in terms of task generalization, timeliness, and robustness against data contamination. To address these limitations, this work proposes the first fully automated framework that dynamically constructs evaluation tasks using real-time web information. The framework harvests live trends to generate information trees, designs structured tasks balancing depth of reasoning and breadth of coverage, and employs a state machine–driven adaptive evolution loop to progressively increase task difficulty without human intervention. This approach enables high-fidelity capability assessment in a completely unsupervised manner. Experiments on six state-of-the-art agents demonstrate a Spearman correlation of 0.94 between the framework's scores and human preferences from the LMSYS Search Arena leaderboard, the strongest alignment achieved to date in unsupervised evaluation.
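The headline 0.94 figure is a Spearman rank correlation between the framework's scores and the leaderboard ordering. As a reminder of what that metric measures, here is a minimal pure-Python sketch; the agent scores and Elo ratings below are made-up illustrative numbers, not data from the paper.

```python
def rank_desc(values):
    # Rank 1 = highest value; assumes no ties, as in a strict leaderboard ordering.
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    # Spearman rho via the closed form for tie-free data:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference.
    n = len(x)
    rx, ry = rank_desc(x), rank_desc(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical scores for six agents (illustrative only):
framework_scores = [0.82, 0.74, 0.69, 0.61, 0.55, 0.48]
arena_elo        = [1250, 1235, 1210, 1228, 1180, 1165]
print(round(spearman(framework_scores, arena_elo), 2))  # → 0.94
```

With only one adjacent pair of agents ranked differently by the two signals, the rank distance is small and rho lands near 0.94, which is the kind of near-perfect ordering agreement the paper reports.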

📝 Abstract
As Large Language Models (LLMs) increasingly operate as Deep Research (DR) Agents capable of autonomous investigation and information synthesis, reliable evaluation of their task performance has become a critical bottleneck. Current benchmarks predominantly rely on static datasets, which suffer from several limitations: limited task generality, temporal misalignment, and data contamination. To address these, we introduce DR-Arena, a fully automated evaluation framework that pushes DR agents to their capability limits through dynamic investigation. DR-Arena constructs real-time Information Trees from fresh web trends to ensure the evaluation rubric is synchronized with the live world state, and employs an automated Examiner to generate structured tasks testing two orthogonal capabilities: Deep reasoning and Wide coverage. DR-Arena further adopts the Adaptive Evolvement Loop, a state-machine controller that dynamically escalates task complexity based on real-time performance, demanding deeper deduction or wider aggregation until a decisive capability boundary emerges. Experiments with six advanced DR agents demonstrate that DR-Arena achieves a Spearman correlation of 0.94 with the LMSYS Search Arena leaderboard. This represents state-of-the-art alignment with human preferences without any manual effort, validating DR-Arena as a reliable alternative to costly human adjudication.
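The abstract's Adaptive Evolvement Loop can be pictured as a small state machine that escalates along one of two axes (deeper deduction or wider aggregation) until the agent fails. The sketch below is a toy illustration under assumed state names and a stand-in `agent_solves` judge; the paper's actual controller, states, and escalation policy are not specified here.

```python
from enum import Enum, auto

class State(Enum):
    PROBE = auto()   # issue a task at the current difficulty
    DEEPEN = auto()  # last escalation increased reasoning depth
    WIDEN = auto()   # last escalation increased coverage breadth
    DONE = auto()    # decisive capability boundary found

def evolve(agent_solves, max_rounds=10):
    """Toy controller: escalate depth or width until the agent fails.

    agent_solves(depth, width) -> bool is a hypothetical stand-in for
    running a DR agent on a generated task and judging the result.
    """
    depth, width, state = 1, 1, State.PROBE
    for _ in range(max_rounds):
        if state is State.DONE:
            break
        if agent_solves(depth, width):
            # Passed: escalate along the axis that is currently smaller,
            # keeping depth and breadth pressure roughly balanced.
            if depth <= width:
                depth += 1
                state = State.DEEPEN
            else:
                width += 1
                state = State.WIDEN
        else:
            state = State.DONE  # decisive failure: boundary reached
    return depth, width, state

# Toy agent that can handle tasks up to a combined complexity of 6:
print(evolve(lambda d, w: d * w <= 6))  # → (3, 3, State.DONE)
```

The returned `(depth, width)` pair is the point at which the agent first failed, which is the "capability boundary" the loop is designed to locate without human intervention.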
Problem

Research questions and friction points this paper is trying to address.

Deep Research Agents
Automated Evaluation
Static Benchmarks
Task Generality
Temporal Misalignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

DR-Arena
Automated Evaluation
Information Trees
Adaptive Evolvement Loop
Deep Research Agents