🤖 AI Summary
Deep search over heterogeneous enterprise data (e.g., documents, meeting notes, Slack messages, GitHub repositories, URLs) requires source awareness and multi-hop reasoning, yet existing methods fail to comprehensively retrieve sparse, interlinked evidence, severely degrading RAG performance. Method: We introduce the first synthetic benchmark grounded in real business workflows (product planning, development, support), featuring a scalable multi-source data generation pipeline that produces a retrieval corpus of 39,190 artifacts and a multi-hop question set with answer annotations for fine-grained evaluation of long-context LLMs and RAG systems. Contribution/Results: Experiments show that state-of-the-art agent-based RAG achieves only 32.96% average accuracy, confirming incomplete evidence retrieval as the fundamental bottleneck. This work is the first to systematically identify, quantify, and benchmark the evidence-completeness challenge in deep search, providing critical infrastructure and a formal problem definition for future research.
📝 Abstract
We present a new benchmark for evaluating Deep Search--a realistic and complex form of retrieval-augmented generation (RAG) that requires source-aware, multi-hop reasoning over diverse, sparse, yet related sources. These include documents, meeting transcripts, Slack messages, GitHub repositories, and URLs, which vary in structure and often contain human-to-human interactions. We build the benchmark using a synthetic data pipeline that simulates business workflows across product planning, development, and support stages, generating interconnected content with realistic noise and multi-hop questions with guaranteed ground-truth answers. We release our benchmark with both answerable and unanswerable queries and a retrieval pool of 39,190 enterprise artifacts, enabling fine-grained evaluation of long-context LLMs and RAG systems. Our experiments reveal that even the best-performing agentic RAG methods achieve an average performance score of only 32.96 on our benchmark. Further analysis highlights retrieval as the main bottleneck: existing methods struggle to conduct deep searches and retrieve all necessary evidence, so they often reason over partial context, leading to significant performance degradation.