RUST-BENCH: Benchmarking LLM Reasoning on Unstructured Text within Structured Tables

📅 2025-11-06
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing tabular reasoning benchmarks predominantly target small, homogeneous tables, failing to capture real-world complexities: long-context inputs, heterogeneous structures that mix structured fields with free text, cross-domain applicability (e.g., science, sports), and multi-hop reasoning over thousands of tokens. Method: We introduce RUST-BENCH, a benchmark explicitly designed to reflect realistic complexity in tabular reasoning. It comprises 7,966 questions over 2,031 real-world hybrid tables spanning NSF grant records (RB-Science) and NBA statistics (RB-Sports), with task formulations requiring multi-hop logical chain reasoning. Contribution/Results: RUST-BENCH is the first to jointly evaluate models along four critical dimensions: scale, heterogeneity, domain specificity, and reasoning depth. Empirical evaluation reveals substantial performance limitations in both leading open-source and proprietary large language models, particularly in heterogeneous schema understanding and long-range reasoning, exposing persistent weaknesses in current architectures and prompting strategies. RUST-BENCH thus establishes a more challenging and practically relevant testbed for advancing tabular reasoning research.

📝 Abstract
Existing tabular reasoning benchmarks mostly test models on small, uniform tables, underrepresenting the complexity of real-world data and giving an incomplete view of Large Language Models' (LLMs) reasoning abilities. Real tables are long, heterogeneous, and domain-specific, mixing structured fields with free text and requiring multi-hop reasoning across thousands of tokens. To address this gap, we introduce RUST-BENCH, a benchmark of 7,966 questions from 2,031 real-world tables spanning two domains: i) RB-Science (NSF grant records) and ii) RB-Sports (NBA statistics). Unlike prior work, RUST-BENCH evaluates LLMs jointly across scale, heterogeneity, domain specificity, and reasoning complexity. Experiments with open-source and proprietary models show that LLMs struggle with heterogeneous schemas and complex multi-hop inference, revealing persistent weaknesses in current architectures and prompting strategies. RUST-BENCH establishes a challenging new testbed for advancing tabular reasoning research.
Problem

Research questions and friction points this paper is trying to address.

Benchmarking LLM reasoning over unstructured text embedded in real-world structured tables
Addressing the limitations of small, uniform tabular reasoning benchmarks
Evaluating multi-hop inference across long, heterogeneous, domain-specific tables
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmark tests LLMs on real-world heterogeneous tables mixing structured fields with free text
Jointly evaluates reasoning across scale, heterogeneity, domain specificity, and complexity
Draws 7,966 questions from 2,031 NSF grant and NBA statistics tables
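To make the task setup concrete, here is a minimal sketch of the kind of hybrid-table, multi-hop query the benchmark targets. The field names and rows below are invented for illustration and are not taken from the actual RUST-BENCH release; the point is only the shape of the reasoning: one hop over a free-text field, a second hop aggregating a structured field.

```python
# Hypothetical RUST-BENCH-style rows: structured fields (pi, year, amount)
# alongside a free-text abstract, mirroring the paper's description of
# unstructured text within structured tables. Data is illustrative only.
rows = [
    {"pi": "A. Smith", "year": 2021, "amount": 500_000,
     "abstract": "Develops graph neural networks for protein folding."},
    {"pi": "B. Jones", "year": 2021, "amount": 750_000,
     "abstract": "Studies coral reef resilience under ocean warming."},
    {"pi": "A. Smith", "year": 2023, "amount": 300_000,
     "abstract": "Extends prior graph methods to RNA structure."},
]

def total_funding_mentioning(rows, keyword):
    """Two-hop query: filter rows whose free-text abstract mentions
    `keyword` (hop 1: unstructured text), then sum the structured
    `amount` field over the matches (hop 2: numeric aggregation)."""
    matches = [r for r in rows if keyword.lower() in r["abstract"].lower()]
    return sum(r["amount"] for r in matches)

print(total_funding_mentioning(rows, "graph"))  # 800000
```

A real benchmark question would require an LLM to perform both hops implicitly from the serialized table, which is where the paper reports models breaking down at scale.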