🤖 AI Summary
This work systematically evaluates the robustness of large language models (LLMs) on table question answering (TQA), examining the effects of in-context learning, model scale, instruction tuning, and domain shift. Experiments span three benchmark datasets—WTQ, TAT-QA, and SciTab—and involve structured perturbations (format, semantic, and noise) alongside data augmentation. Key findings: (1) instruction tuning yields an average +12.7% accuracy gain and substantially improves robustness; (2) newer models (e.g., GPT-4, Claude series) outperform earlier generations; (3) WTQ suffers from severe data contamination, inflating performance estimates, whereas TAT-QA and SciTab better reflect true generalization capability. The study further advocates for structure-aware self-attention mechanisms and domain-adaptive modeling to enhance reliability and trustworthiness in table understanding.
📝 Abstract
Large Language Models (LLMs), already shown to excel at various text comprehension tasks, have also remarkably been shown to tackle table comprehension tasks without task-specific training. While previous research has explored LLM capabilities with tabular dataset tasks, our study assesses the influence of *in-context learning*, *model scale*, *instruction tuning*, and *domain biases* on Tabular Question Answering (TQA). We evaluate the robustness of LLMs on the Wikipedia-based **WTQ**, financial report-based **TAT-QA**, and scientific claims-based **SCITAB** TQA datasets, focusing on their ability to interpret tabular data robustly under various augmentations and perturbations. Our findings indicate that instructions significantly enhance performance, with recent models exhibiting greater robustness than earlier versions. However, data contamination and practical reliability issues persist, especially with **WTQ**. We highlight the need for improved methodologies, including structure-aware self-attention mechanisms and better handling of domain-specific tabular data, to develop more reliable LLMs for table comprehension.
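To make the evaluation setup concrete, here is a minimal sketch of one kind of *format perturbation* described above: permuting a table's column order before serializing it into an LLM prompt. The content of the table is unchanged, so a robust model should answer identically. The helper names (`shuffle_columns`, `to_markdown`) and the markdown serialization are illustrative assumptions, not the paper's actual pipeline.

```python
import random

def shuffle_columns(header, rows, seed=0):
    """Format perturbation (illustrative): permute column order while
    keeping header/cell alignment, so the table content is unchanged."""
    rng = random.Random(seed)
    order = list(range(len(header)))
    rng.shuffle(order)
    new_header = [header[i] for i in order]
    new_rows = [[row[i] for i in order] for row in rows]
    return new_header, new_rows

def to_markdown(header, rows):
    """Serialize a table as markdown for inclusion in an LLM prompt."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in rows]
    return "\n".join(lines)

# Hypothetical example table; in a robustness test, the model's answer
# on the perturbed serialization is compared against the original.
header = ["Player", "Team", "Goals"]
rows = [["Messi", "PSG", "30"], ["Haaland", "City", "36"]]
print(to_markdown(*shuffle_columns(header, rows)))
```

Semantic and noise perturbations would follow the same pattern, but alter cell values (e.g., synonym substitution or typo injection) rather than the layout, testing whether the model depends on surface form instead of table structure.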