🤖 AI Summary
Existing evaluations of deep research agents lack benchmarks that model and assess user interaction in dynamic, ambiguous research scenarios. This work proposes the first systematic benchmark for interactive deep research, featuring a modular multi-agent architecture and an extensible reference-guided user simulator. The framework is the first to jointly incorporate dynamic user feedback, interaction cost (measured in turns and tokens), and research quality into a unified evaluation paradigm. Experiments across seven prominent large language models demonstrate that interactivity substantially enhances both research quality and robustness, often outweighing differences due to model scale, and reveal significant disparities among models in interaction efficiency.
📝 Abstract
Deep research agents powered by Large Language Models (LLMs) can perform multi-step reasoning, web exploration, and long-form report generation. However, most existing systems operate in an autonomous manner, assuming fully specified user intent and evaluating only final outputs. In practice, research goals are often underspecified and evolve during exploration, making sustained interaction essential for robust alignment. Despite its importance, interaction remains largely invisible to existing deep research benchmarks, which neither model dynamic user feedback nor quantify its costs. We introduce IDRBench, the first benchmark for systematically evaluating interactive deep research. IDRBench combines a modular multi-agent research framework with on-demand interaction, a scalable reference-grounded user simulator, and an interaction-aware evaluation suite that jointly measures interaction benefits (quality and alignment) and costs (turns and tokens). Experiments across seven state-of-the-art LLMs show that interaction consistently improves research quality and robustness, often outweighing differences in model capacity, while revealing substantial trade-offs in interaction efficiency.
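The abstract does not spell out how interaction benefits and costs are combined, so the following is only a minimal sketch of what an interaction-aware score could look like, assuming a simple cost-normalized aggregation; the `InteractionRecord` fields, budgets, and weighting are hypothetical placeholders, not the metric IDRBench actually defines.

```python
from dataclasses import dataclass


@dataclass
class InteractionRecord:
    """Hypothetical per-task record of one interactive research run."""
    quality: float    # research-quality score in [0, 1]
    alignment: float  # alignment-with-user-intent score in [0, 1]
    turns: int        # number of user-agent interaction turns
    tokens: int       # total tokens exchanged during interaction


def interaction_aware_score(rec: InteractionRecord,
                            turn_budget: int = 10,
                            token_budget: int = 20_000,
                            cost_weight: float = 0.2) -> float:
    """Combine interaction benefit and cost into a single number.

    Benefit is the mean of quality and alignment; cost is the fraction
    of the turn/token budgets consumed. The formula is illustrative only.
    """
    benefit = 0.5 * (rec.quality + rec.alignment)
    cost = 0.5 * (min(rec.turns / turn_budget, 1.0)
                  + min(rec.tokens / token_budget, 1.0))
    return benefit - cost_weight * cost


# Example: a high-quality, well-aligned run using 4 turns and ~6k tokens.
print(interaction_aware_score(InteractionRecord(0.82, 0.90, 4, 6_000)))
```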