🤖 AI Summary
This work addresses a critical limitation of existing open-domain research report generation methods: they often lack effective mechanisms for assessing content credibility, which risks hallucination and misinformation. To mitigate this, the paper introduces a Deep Research Agent that, for the first time, incorporates a progressive confidence estimation and calibration framework in settings without ground-truth references. By leveraging deep retrieval and multi-hop reasoning, the agent anchors each claim to verifiable evidence and assigns it an interpretable confidence score. Integrating a structured workflow with cognitive modeling of confidence, the proposed approach substantially improves the transparency and interpretability of generated reports, strengthens user trust, and effectively suppresses hallucinated and misleading content.
📝 Abstract
As agent-based systems continue to evolve, deep research agents can automatically generate research-style reports across diverse domains. While these agents promise to streamline information synthesis and knowledge exploration, existing evaluation frameworks, which typically rely on subjective dimensions, fail to capture a critical aspect of report quality: trustworthiness. In open-ended research scenarios where ground-truth answers are unavailable, current evaluation methods cannot effectively measure the epistemic confidence of generated content, making calibration difficult and leaving users susceptible to misleading or hallucinated information. To address this limitation, we propose a novel deep research agent that incorporates progressive confidence estimation and calibration within the report generation pipeline. Our system leverages a deliberative search model featuring deep retrieval and multi-hop reasoning to ground outputs in verifiable evidence while assigning confidence scores to individual claims. Combined with a carefully designed workflow, this approach produces trustworthy reports with enhanced transparency. Experimental results and case studies demonstrate that our method substantially improves interpretability and significantly increases user trust.
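Neither the summary nor the abstract specifies the pipeline at the code level, but the core idea, anchoring each claim to retrieved evidence and attaching a calibrated, interpretable confidence score, can be sketched. The following is a minimal illustration under assumed details: the `Claim` and `Evidence` types, the noisy-OR aggregation in `estimate_confidence`, and the temperature scaling in `calibrate` are hypothetical stand-ins, not the authors' actual method.

```python
from dataclasses import dataclass, field
import math

@dataclass
class Evidence:
    source_url: str        # where the supporting passage was retrieved from
    support_score: float   # verifier estimate in [0, 1] that it supports the claim

@dataclass
class Claim:
    text: str
    evidence: list[Evidence] = field(default_factory=list)
    confidence: float = 0.0

def estimate_confidence(claim: Claim) -> float:
    """Raw confidence via noisy-OR: probability that at least one evidence item
    supports the claim, treating support scores as independent (an assumed rule)."""
    if not claim.evidence:
        return 0.0
    p_all_fail = 1.0
    for ev in claim.evidence:
        p_all_fail *= (1.0 - ev.support_score)
    return 1.0 - p_all_fail

def calibrate(p: float, temperature: float = 1.5) -> float:
    """Temperature scaling in logit space, a standard post-hoc calibration
    technique; the paper's actual calibration procedure may differ."""
    p = min(max(p, 1e-6), 1.0 - 1e-6)
    logit = math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-logit / temperature))

# Usage: score a single claim backed by two retrieved passages.
claim = Claim(
    text="The proposed agent grounds report claims in retrieved evidence.",
    evidence=[Evidence("https://example.org/a", 0.8),
              Evidence("https://example.org/b", 0.6)],
)
claim.confidence = calibrate(estimate_confidence(claim))
print(f"{claim.confidence:.2f}")  # calibrated score attached to the claim
```

The noisy-OR aggregation and temperature scaling are only common defaults; the paper's "progressive" estimation presumably refines such scores across retrieval and reasoning hops rather than computing them in a single pass.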