🤖 AI Summary
Current benchmarks for evaluating large language models (LLMs) on structured information extraction (e.g., key-value pairs) do not scale to domain- or organization-specific documents, because manual construction is costly and limits benchmark size. This paper introduces StructText, an end-to-end framework for automatically generating high-fidelity textual benchmarks from real-world tabular data. It adopts a two-phase "plan-then-execute" text-generation paradigm to ensure semantic consistency and domain adaptability, and pairs it with a multi-dimensional evaluation framework that integrates LLM-based judgment with objective metrics for numerical accuracy, temporal consistency, and factual alignment. Empirical validation across 49 datasets and 71,539 samples shows that while state-of-the-art LLMs achieve high factual accuracy, they still exhibit notable weaknesses in narrative coherence and structural extractability. The framework, benchmark datasets, and baseline implementations are publicly released.
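To make the two-phase generation idea concrete, here is a minimal sketch of what a "plan-then-execute" step could look like. The `llm` callable, the prompt wording, and the `generate_text` helper are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a two-phase "plan-then-execute" table-to-text step.
# The `llm` callable and both prompt templates are assumptions made for
# illustration; they do not reflect the paper's actual interfaces.
from typing import Callable

def generate_text(row: dict[str, str], llm: Callable[[str], str]) -> str:
    # Phase 1 (plan): ask for an outline that covers every table field,
    # so no key-value pair from the structured ground truth is dropped.
    plan_prompt = (
        "Draft a short outline for a factual paragraph that mentions each "
        "of these fields exactly once:\n"
        + "\n".join(f"- {key}: {value}" for key, value in row.items())
    )
    plan = llm(plan_prompt)

    # Phase 2 (execute): realise the outline as fluent prose while keeping
    # numbers and dates verbatim, preserving alignment with the table.
    return llm(
        "Write the paragraph following this outline, copying all numeric "
        f"and temporal values exactly as given:\n{plan}"
    )
```

Plugging any text-completion client in as `llm` would yield one synthetic document per table row, with the planning phase acting as a coverage check before fluent text is produced.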
📝 Abstract
Extracting structured information from text, such as key-value pairs that can augment tabular data, is useful in many enterprise use cases. Although large language models (LLMs) have enabled numerous automated pipelines for converting natural language into structured formats, benchmarks for evaluating their extraction quality remain scarce, especially for specialised domains or documents specific to a given organization. Building such benchmarks through manual annotation is labour-intensive and limits their size and scalability. In this work, we present StructText, an end-to-end framework for automatically generating high-fidelity benchmarks for key-value extraction from text using existing tabular data. It treats the available tabular data as structured ground truth and follows a two-stage "plan-then-execute" pipeline to synthetically generate the corresponding natural-language text. To ensure alignment between the text and its structured source, we introduce a multi-dimensional evaluation strategy that combines (a) LLM-based judgments of factuality, hallucination, and coherence with (b) objective extraction metrics measuring numeric and temporal accuracy. We evaluated the proposed method on 71,539 examples across 49 datasets. Results reveal that while LLMs achieve strong factual accuracy and avoid hallucination, they struggle with narrative coherence when producing extractable text. Notably, models preserve numerical and temporal information with high fidelity, yet this information becomes embedded in narratives that resist automated extraction. We release the framework, including datasets, evaluation tools, and baseline extraction systems, to support continued research.
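As an illustration of the objective side of this evaluation, the sketch below checks whether numeric and date values from a source row survive verbatim in the generated text. The regex-based matching rules and the `numeric_accuracy`/`temporal_accuracy` helpers are simplified assumptions, not the paper's actual metric definitions.

```python
# Hypothetical sketch of objective extraction metrics: the fraction of
# ground-truth numbers and ISO-style dates recoverable from generated text.
# The matching rules are simplified assumptions, not the paper's metrics.
import re

def numeric_accuracy(row: dict[str, str], text: str) -> float:
    """Fraction of ground-truth numbers that appear verbatim in the text."""
    truths = [v for v in row.values() if re.fullmatch(r"-?\d+(\.\d+)?", v)]
    if not truths:
        return 1.0
    found = set(re.findall(r"-?\d+(?:\.\d+)?", text))
    return sum(v in found for v in truths) / len(truths)

def temporal_accuracy(row: dict[str, str], text: str) -> float:
    """Fraction of ISO dates (YYYY-MM-DD) recovered verbatim from the text."""
    truths = [v for v in row.values() if re.fullmatch(r"\d{4}-\d{2}-\d{2}", v)]
    if not truths:
        return 1.0
    found = set(re.findall(r"\d{4}-\d{2}-\d{2}", text))
    return sum(v in found for v in truths) / len(truths)

row = {"revenue": "71539", "reported_on": "2024-03-31"}
text = "Revenue of 71539 was reported on 2024-03-31."
print(numeric_accuracy(row, text), temporal_accuracy(row, text))  # 1.0 1.0
```

Scores like these complement LLM-based judgments: a narrative can read as factual and coherent while paraphrasing a number or date into a form such checks (and downstream extractors) no longer recover.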