🤖 AI Summary
This work addresses the lack of systematic evaluation of critical capabilities—such as reasoning and conflict resolution—in existing document-to-table (Doc2Table) extraction benchmarks. To this end, we propose DTBench, the first capability-oriented synthetic benchmark, which leverages a reverse Table2Doc paradigm and a multi-agent collaborative workflow to automatically generate diverse documents from real-world tables, covering five major categories and thirteen subcategories of required capabilities. This approach overcomes the high annotation cost and limited coverage inherent in manual curation, enabling a scalable, high-quality evaluation framework. Empirical results reveal significant deficiencies in mainstream large language models regarding faithfulness, reasoning, and conflict resolution, establishing DTBench as a public, comprehensive platform for advancing Doc2Table research.
📝 Abstract
Document-to-table (Doc2Table) extraction derives structured tables from unstructured documents under a target schema, enabling reliable and verifiable SQL-based data analytics. Although large language models (LLMs) have shown promise in flexible information extraction, their ability to produce precisely structured tables remains insufficiently understood, particularly for indirect extraction that requires complex capabilities such as reasoning and conflict resolution. Existing benchmarks neither explicitly distinguish nor comprehensively cover the diverse capabilities required for Doc2Table extraction. We argue that a capability-aware benchmark is essential for systematic evaluation. However, constructing such a benchmark from human-annotated document-table pairs is costly, difficult to scale, and limited in capability coverage. To address this, we adopt a reverse Table2Doc paradigm and design a multi-agent synthesis workflow that generates documents from ground-truth tables. Building on this approach, we present DTBench, a synthetic benchmark organized around a proposed two-level taxonomy of Doc2Table capabilities, covering 5 major categories and 13 subcategories. We evaluate several mainstream LLMs on DTBench and observe substantial performance gaps across models, as well as persistent challenges in reasoning, faithfulness, and conflict resolution. DTBench provides a comprehensive testbed for data generation and evaluation, facilitating future research on Doc2Table extraction. The benchmark is publicly available at https://github.com/ZJU-DAILY/DTBench.
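One appeal of the reverse Table2Doc paradigm is that the source table doubles as ground truth: an extracted table can be scored directly against the table the document was generated from. As a minimal illustration (not DTBench's actual metric or code), the sketch below scores a predicted table against a ground-truth table with exact-match cell-level F1; the table representation and scoring rule are assumptions for this example.

```python
from typing import Dict, List, Set, Tuple

# Hypothetical illustration of Table2Doc-style evaluation: since documents
# are synthesized from a ground-truth table, a predicted table can be scored
# against that table directly. Exact-match cell F1 is used here purely as a
# simple stand-in metric.

Cell = Tuple[int, str, str]  # (row index, column name, cell value)

def cells(table: List[Dict[str, str]]) -> Set[Cell]:
    """Flatten a table (a list of row dicts) into a set of (row, col, value) cells."""
    return {(i, col, val) for i, row in enumerate(table) for col, val in row.items()}

def cell_f1(pred: List[Dict[str, str]], gold: List[Dict[str, str]]) -> float:
    """Exact-match cell-level F1 between a predicted and a ground-truth table."""
    p, g = cells(pred), cells(gold)
    if not p or not g:
        return 0.0
    tp = len(p & g)  # cells that match exactly in position, column, and value
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)

gold = [{"city": "Paris", "pop": "2.1M"}, {"city": "Lyon", "pop": "0.5M"}]
pred = [{"city": "Paris", "pop": "2.1M"}, {"city": "Lyon", "pop": "1.7M"}]
print(cell_f1(pred, gold))  # 3 of 4 cells match -> F1 = 0.75
```

A real benchmark metric would additionally need schema alignment, row matching that tolerates reordering, and value normalization; exact cell matching is the simplest possible baseline.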