Table Question Answering in the Era of Large Language Models: A Comprehensive Survey of Tasks, Methods, and Evaluation

📅 2025-10-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Table Question Answering (TQA) faces fundamental challenges in the large language model (LLM) era, including ambiguous task definitions, inconsistent evaluation protocols, and difficulty handling complex queries and multimodal table representations. This work presents the first systematic survey of LLM-based TQA research. It introduces a unified task formulation and a four-dimensional taxonomy covering input representation, reasoning paradigms, context integration, and optimization strategies, and it incorporates emerging methodologies such as reinforcement learning into the TQA framework for the first time. A comprehensive horizontal comparison spanning 32 datasets and more than 50 models exposes critical limitations in logical reasoning, long-table processing, and cross-modal alignment. Beyond clarifying technical boundaries, the analysis identifies three key future directions: structure-aware table encoding, interpretable multi-step reasoning, and continual learning for real-world deployment, providing a structured roadmap for advancing the field.
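To make the four taxonomy dimensions concrete, the sketch below encodes them as a simple data structure. The dimension names come from the summary above; the example entries under each dimension are illustrative assumptions, not an exhaustive list from the paper.

```python
# Sketch of the survey's four-dimensional taxonomy as a data structure.
# Dimension names follow the summary; the entries under each dimension are
# illustrative assumptions, not the paper's full categorization.
TQA_TAXONOMY: dict[str, list[str]] = {
    "input_representation": ["markdown serialization", "database schema / SQL tables", "table images"],
    "reasoning_paradigms": ["direct prompting", "chain-of-thought", "program or SQL generation"],
    "context_integration": ["table only", "table + text passages", "multiple tables"],
    "optimization_strategies": ["in-context learning", "supervised fine-tuning", "reinforcement learning"],
}

def is_fully_classified(method: dict[str, str]) -> bool:
    """Check that a method description assigns a value to every taxonomy dimension."""
    return set(method) == set(TQA_TAXONOMY)
```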

📝 Abstract
Table Question Answering (TQA) aims to answer natural language questions about tabular data, often accompanied by additional contexts such as text passages. The task spans diverse settings, varying in table representation, question/answer complexity, modality involved, and domain. While recent advances in large language models (LLMs) have led to substantial progress in TQA, the field still lacks a systematic organization and understanding of task formulations, core challenges, and methodological trends, particularly in light of emerging research directions such as reinforcement learning. This survey addresses this gap by providing a comprehensive and structured overview of TQA research with a focus on LLM-based methods. We provide a comprehensive categorization of existing benchmarks and task setups. We group current modeling strategies according to the challenges they target, and analyze their strengths and limitations. Furthermore, we highlight underexplored but timely topics that have not been systematically covered in prior research. By unifying disparate research threads and identifying open problems, our survey offers a consolidated foundation for the TQA community, enabling a deeper understanding of the state of the art and guiding future developments in this rapidly evolving area.
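As a concrete illustration of the task formulation described in the abstract, here is a minimal sketch of a prompt-based TQA pipeline: serialize a table, optionally attach a text passage, and query an LLM. It assumes the `openai` Python client; the model name, helper functions (`table_to_markdown`, `answer_table_question`), and the toy table are hypothetical choices for illustration, not taken from the survey.

```python
# Minimal sketch of an LLM-based TQA pipeline, assuming a generic
# OpenAI-style chat-completion client. Model name and example data are
# illustrative assumptions, not from the surveyed paper.
from openai import OpenAI


def table_to_markdown(header: list[str], rows: list[list[str]]) -> str:
    """Serialize a table into markdown, a common LLM input representation."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(c) for c in row) + " |" for row in rows]
    return "\n".join(lines)


def answer_table_question(question: str, header: list[str], rows: list[list[str]],
                          passage: str | None = None) -> str:
    """Ask an LLM a natural-language question about a table, with optional text context."""
    prompt = "Answer the question using the table"
    if passage:
        prompt += " and the passage"
    prompt += ".\n\nTable:\n" + table_to_markdown(header, rows)
    if passage:
        prompt += "\n\nPassage:\n" + passage
    prompt += f"\n\nQuestion: {question}\nAnswer:"

    client = OpenAI()  # reads the API key from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()


if __name__ == "__main__":
    header = ["City", "Population"]
    rows = [["Oslo", "709,037"], ["Bergen", "291,940"]]
    print(answer_table_question("Which city has the larger population?", header, rows))
```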
Problem

Research questions and friction points this paper is trying to address.

TQA task formulations, core challenges, and methodological trends lack a systematic organization in the LLM era
Strengths and limitations of LLM-based approaches to tabular data comprehension are not well understood
Underexplored but timely directions, such as multimodal table understanding and reinforcement learning, have not been systematically covered
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified task formulation and four-dimensional taxonomy of LLM-based TQA methods (input representation, reasoning paradigms, context integration, optimization strategies)
Comprehensive categorization of existing benchmarks and task setups
Grouping of modeling strategies by the challenges they target, with analysis of their strengths and limitations