🤖 AI Summary
Existing deep research frameworks lack systematic evaluation and stage-wise safeguards for report quality (credibility, coherence, breadth, depth, and safety), leaving them vulnerable to harmful content injection.
Method: We propose a four-stage safeguarded deep research framework that integrates open-domain citation verification and multidimensional quality assessment, deploying dynamic guardrails at the input, planning, research, and output stages.
Contribution/Results: We introduce DRSAFEBENCH, the first stage-aware benchmark for deep research safety, and conduct empirical evaluations with models including GPT-4o and Gemini-2.5-flash. Experimental results demonstrate an average 18.16% improvement in defense success rate and a 6% reduction in over-refusal rate, significantly enhancing both security and reliability while preserving report quality.
📝 Abstract
Deep research frameworks have shown promising capabilities in synthesizing comprehensive reports from web sources. While deep research holds significant potential for addressing complex issues through planning and research cycles, existing frameworks lack adequate evaluation procedures and stage-specific protections. They typically reduce evaluation to exact-match question-answering accuracy, overlooking crucial aspects of report quality such as credibility, coherence, breadth, depth, and safety. This oversight may allow hazardous or malicious sources to be integrated into the final report. To address these issues, we introduce DEEPRESEARCHGUARD, a comprehensive framework featuring four-stage safeguards with open-domain evaluation of references and reports. We assess performance across multiple metrics, e.g., defense success rate and over-refusal rate, and five key report dimensions. In the absence of a suitable safety benchmark, we introduce DRSAFEBENCH, a stage-wise benchmark for deep research safety. Our evaluation spans diverse state-of-the-art LLMs, including GPT-4o, Gemini-2.5-flash, DeepSeek-v3, and o4-mini. DEEPRESEARCHGUARD achieves an average defense success rate improvement of 18.16% while reducing the over-refusal rate by 6%. The input guard provides the most substantial early-stage protection by filtering out obvious risks, while the plan and research guards enhance citation discipline and source credibility. Through extensive experiments, we show that DEEPRESEARCHGUARD enables comprehensive open-domain evaluation and stage-aware defenses that effectively block harmful content propagation, while systematically improving report quality without excessive over-refusal. The code is available at https://github.com/Jasonya/DeepResearchGuard.
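To make the stage-wise idea concrete, the following is a minimal, hypothetical sketch of a four-stage guard pipeline (input, plan, research, output). All function names, filter rules, and the trusted-domain list are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hypothetical sketch of a stage-wise guard pipeline in the spirit of the
# four stages described above. Rules and lists below are toy assumptions.

BLOCKED_TERMS = {"build a bomb", "synthesize ricin"}       # toy input filter
TRUSTED_DOMAINS = {"nature.com", "arxiv.org", "who.int"}   # toy credibility list

def input_guard(text: str) -> bool:
    """Stage 1: reject text containing obviously harmful intent."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def plan_guard(plan: list[str]) -> list[str]:
    """Stage 2: drop plan steps that reference blocked topics."""
    return [step for step in plan if input_guard(step)]

def research_guard(sources: list[dict]) -> list[dict]:
    """Stage 3: keep only sources from trusted domains (citation discipline)."""
    return [s for s in sources if s["domain"] in TRUSTED_DOMAINS]

def output_guard(report: str, sources: list[dict]) -> dict:
    """Stage 4: final check that the report cites at least one vetted source."""
    return {
        "report": report,
        "citations": [s["url"] for s in sources],
        "approved": len(sources) > 0,
    }

def run_pipeline(query: str, plan: list[str],
                 sources: list[dict], report: str) -> dict:
    """Run the query through all four guard stages in order."""
    if not input_guard(query):
        return {"approved": False, "reason": "blocked at input stage"}
    safe_plan = plan_guard(plan)
    vetted = research_guard(sources)
    return output_guard(report, vetted)
```

The early-exit at the input stage mirrors the observation above that the input guard provides the most substantial early-stage protection, while later guards refine citation quality rather than blocking outright.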