Deep Research Comparator: A Platform For Fine-grained Human Annotations of Deep Research Agents

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep research agents autonomously retrieve information, analyze it, and generate long-form reports, yet existing evaluation methods offer no fine-grained assessment of either their intermediate reasoning steps or their final outputs. Method: We propose the first evaluation framework to support step-level annotation and cross-agent side-by-side comparison, implemented as an open-source platform with modular LLM integration, frontend-backend separation, and a real-time comparative interface. The platform collects multi-dimensional human feedback and computes preference rankings. We design a structured annotation protocol for granular assessment across report stages and introduce Simple Deepresearch, a lightweight baseline agent scaffold. Contribution/Results: Using preference data from 17 annotators, we evaluate three deep research agents, demonstrating the framework's effectiveness and scalability for iterative agent development. The platform and protocols are publicly released to foster reproducible, human-in-the-loop evaluation of deep research systems.
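The summary does not specify how the preference rankings are computed; a common choice for pairwise preference data from side-by-side comparisons, used by arena-style evaluation platforms, is the Bradley-Terry model. A minimal sketch, assuming win counts aggregated from annotator votes (the function name, data layout, and example counts below are illustrative, not the platform's actual implementation):

```python
from collections import defaultdict

def bradley_terry(pairwise_wins, n_iters=1000, tol=1e-8):
    """Estimate Bradley-Terry strengths from pairwise win counts.

    pairwise_wins[(a, b)] = number of times agent `a` was preferred over `b`.
    Returns a dict mapping agent name -> strength (normalized to sum to 1),
    fitted with the standard MM (minorization-maximization) update.
    """
    agents = sorted({x for pair in pairwise_wins for x in pair})
    p = {a: 1.0 for a in agents}
    wins = defaultdict(float)
    for (a, b), w in pairwise_wins.items():
        wins[a] += w
    for _ in range(n_iters):
        p_new = {}
        for a in agents:
            denom = 0.0
            for b in agents:
                if b == a:
                    continue
                n_ab = pairwise_wins.get((a, b), 0) + pairwise_wins.get((b, a), 0)
                if n_ab:
                    denom += n_ab / (p[a] + p[b])
            p_new[a] = wins[a] / denom if denom else p[a]
        s = sum(p_new.values())
        p_new = {a: v / s for a, v in p_new.items()}
        if max(abs(p_new[a] - p[a]) for a in agents) < tol:
            return p_new
        p = p_new
    return p

# Example: hypothetical vote counts over three agents (not the paper's data)
prefs = {
    ("agent_A", "agent_B"): 9, ("agent_B", "agent_A"): 4,
    ("agent_A", "agent_C"): 7, ("agent_C", "agent_A"): 6,
    ("agent_B", "agent_C"): 5, ("agent_C", "agent_B"): 8,
}
print(bradley_terry(prefs))  # higher strength = more preferred
```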

📝 Abstract
Effectively evaluating deep research agents that autonomously search the web, analyze information, and generate reports remains a major challenge, particularly when it comes to assessing long reports and giving detailed feedback on their intermediate steps. To address these gaps, we introduce Deep Research Comparator, a platform that offers a holistic framework for deep research agent hosting, side-by-side comparison, fine-grained human feedback collection, and ranking calculation. Given a user query, our platform displays the final reports from two different agents along with their intermediate steps during generation. Annotators can evaluate the overall quality of final reports based on side-by-side comparison, and also provide detailed feedback separately by assessing intermediate steps or specific text spans within the final report. Furthermore, we develop Simple Deepresearch, an end-to-end agent scaffold. This scaffold serves as a baseline that facilitates the easy integration of various large language models to transform them into deep research agents for evaluation. To demonstrate the platform's utility for deep research agent development, we have collected real user preference data from 17 annotators on three deep research agents. A demo video of our platform can be found at https://www.youtube.com/watch?v=g4d2dnbdseg.
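To make the fine-grained feedback described above concrete, here is one plausible shape for an annotation record covering the overall side-by-side preference, step-level ratings, and span-level ratings within a report. All class and field names are hypothetical; the paper does not publish its schema:

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

@dataclass
class SpanFeedback:
    """Rating of a specific text span within one agent's final report."""
    agent_id: str
    start: int  # character offset into the report text
    end: int
    rating: Literal["good", "bad"]
    comment: Optional[str] = None

@dataclass
class StepFeedback:
    """Rating of a single intermediate step (e.g., a search or analysis action)."""
    agent_id: str
    step_index: int
    rating: Literal["good", "bad"]
    comment: Optional[str] = None

@dataclass
class ComparisonRecord:
    """One annotation session: a query, two agents, and all collected feedback."""
    query: str
    agent_a: str
    agent_b: str
    overall_preference: Literal["a", "b", "tie"]
    step_feedback: list[StepFeedback] = field(default_factory=list)
    span_feedback: list[SpanFeedback] = field(default_factory=list)
```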
Problem

Research questions and friction points this paper is trying to address.

Effectively evaluating the performance of autonomous deep research agents
Assessing long reports and intermediate steps with detailed feedback
Lack of a platform for side-by-side agent comparison and human annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Platform for side-by-side agent report comparison
Fine-grained human feedback on intermediate steps
End-to-end scaffold for LLM integration (see the sketch below)
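The last item refers to the Simple Deepresearch scaffold, which turns a generic LLM into a deep research agent. A minimal sketch of such an end-to-end loop, assuming caller-supplied `llm` and `search` callables; the actual scaffold's interface and prompting are not specified in this summary:

```python
from typing import Callable, Iterator

def simple_deepresearch(
    query: str,
    llm: Callable[[str], str],           # any chat-style LLM: prompt -> completion
    search: Callable[[str], list[str]],  # web search: query -> result snippets
    max_steps: int = 5,
) -> Iterator[dict]:
    """Minimal research loop: plan searches, gather evidence, write a report.

    Yields each intermediate step as a dict so a frontend can stream it in
    real time, then yields the final report. Illustrative only.
    """
    evidence: list[str] = []
    for step in range(max_steps):
        sub_query = llm(
            f"Research question: {query}\n"
            f"Evidence so far: {evidence}\n"
            "Propose the single most useful web search query, or reply DONE."
        ).strip()
        if sub_query == "DONE":
            break
        snippets = search(sub_query)
        evidence.extend(snippets)
        yield {"type": "step", "index": step, "query": sub_query, "results": snippets}
    report = llm(
        f"Write a detailed report answering: {query}\n"
        "Use this evidence:\n" + "\n".join(evidence)
    )
    yield {"type": "report", "text": report}
```

Streaming intermediate steps as structured events is what lets a comparison frontend display two agents' reasoning side by side while the final reports are still being generated.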