AI Summary
Existing tabular reasoning benchmarks predominantly target small, homogeneous tables, failing to capture real-world complexities, including long-context inputs, heterogeneous structures that mix structured fields with free text, cross-domain applicability (e.g., science, sports), and multi-hop reasoning over thousands of tokens.
Method: We introduce RUST-BENCH, the first benchmark explicitly designed to reflect realistic complexity in tabular reasoning. It comprises 7,966 questions derived from authentic hybrid data combining unstructured and structured content, with task formulations that mandate multi-hop logical chain reasoning.
Contribution/Results: RUST-BENCH is the first benchmark to jointly evaluate models along four critical dimensions: scale, heterogeneity, domain specificity, and reasoning depth. Empirical evaluation reveals substantial performance limitations of both leading open- and closed-source large language models, particularly in heterogeneous schema understanding and long-range reasoning, exposing fundamental bottlenecks in current architectures and prompting strategies. RUST-BENCH thus establishes a more challenging and practically relevant benchmark for advancing tabular reasoning research.
Abstract
Existing tabular reasoning benchmarks mostly test models on small, uniform tables, underrepresenting the complexity of real-world data and giving an incomplete view of large language models' (LLMs) reasoning abilities. Real tables are long, heterogeneous, and domain-specific, mixing structured fields with free text and requiring multi-hop reasoning across thousands of tokens. To address this gap, we introduce RUST-BENCH, a benchmark of 7,966 questions from 2,031 real-world tables spanning two domains: i) RB-Science (NSF grant records) and ii) RB-Sports (NBA statistics). Unlike prior work, RUST-BENCH evaluates LLMs jointly across scale, heterogeneity, domain specificity, and reasoning complexity. Experiments with open-source and proprietary models show that LLMs struggle with heterogeneous schemas and complex multi-hop inference, revealing persistent weaknesses in current architectures and prompting strategies. RUST-BENCH establishes a challenging new testbed for advancing tabular reasoning research.