A Benchmark for Deep Information Synthesis

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current evaluation benchmarks struggle to assess the ability of large language model (LLM) agents to integrate multi-source information and generate deep insights in real-world scenarios. To address this gap, this work proposes DEEPSYNTH, the first benchmark centered on deep information synthesis and structured reasoning over authentic tasks. It comprises 120 complex tasks spanning seven domains, with data sources covering 67 countries, and is built through a multi-stage, human-crafted pipeline so that each task requires coordinated information gathering, synthesis, and reasoning and comes with a verifiable answer. Evaluations of 11 state-of-the-art LLMs and deep research agents reveal poor performance on DEEPSYNTH, with a maximum F1 score of 8.97 and a maximum LLM-judge score of 17.5, exposing critical deficiencies in hallucination control and reasoning over large, multi-source information spaces and filling a key gap in existing evaluation frameworks.

📝 Abstract
Large language model (LLM)-based agents are increasingly used to solve complex tasks involving tool use, such as web browsing, code execution, and data analysis. However, current evaluation benchmarks do not adequately assess their ability to solve real-world tasks that require synthesizing information from multiple sources and inferring insights beyond simple fact retrieval. To address this, we introduce DEEPSYNTH, a novel benchmark designed to evaluate agents on realistic, time-consuming problems that combine information gathering, synthesis, and structured reasoning to produce insights. DEEPSYNTH contains 120 tasks collected across 7 domains and data sources covering 67 countries. DEEPSYNTH is constructed using a multi-stage data collection pipeline that requires annotators to collect official data sources, create hypotheses, perform manual analysis, and design tasks with verifiable answers. When evaluated on DEEPSYNTH, 11 state-of-the-art LLMs and deep research agents achieve a maximum F1 score of 8.97 and a maximum score of 17.5 on the LLM-judge metric, underscoring the difficulty of the benchmark. Our analysis reveals that current agents struggle with hallucinations and reasoning over large information spaces, highlighting DEEPSYNTH as a crucial benchmark for guiding future research.
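The abstract reports answer-level F1 and LLM-judge scores but does not define them here. The Python sketch below shows one common way a token-overlap F1 between a predicted and a reference answer could be computed; the normalization, tokenization, and 0-100 scaling are assumptions for illustration, not DEEPSYNTH's exact scoring protocol.

```python
# Minimal sketch of a token-overlap F1 between a predicted and a reference
# answer. Normalization and tokenization below are assumed for illustration;
# the benchmark's actual scoring protocol may differ.
from collections import Counter


def normalize(text: str) -> list[str]:
    """Lowercase and split on whitespace (assumed normalization)."""
    return text.lower().split()


def answer_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall over the two answers."""
    pred_tokens = normalize(prediction)
    ref_tokens = normalize(reference)
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


# A benchmark-level score would average this over all 120 tasks and is
# typically reported on a 0-100 scale, as in the figures quoted above.
print(round(answer_f1("GDP grew by 3.2 percent", "3.2 percent GDP growth") * 100, 2))
```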
Problem

Research questions and friction points this paper is trying to address.

information synthesis
large language models
evaluation benchmark
complex reasoning
multi-source integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

deep information synthesis
LLM-based agents
evaluation benchmark
structured reasoning
multi-source integration