MATA: Multi-Agent Framework for Reliable and Flexible Table Question Answering

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of reliability, efficiency, and scalability in tabular question answering under resource-constrained or privacy-sensitive settings. The authors propose MATA, a framework that combines a multi-agent collaboration mechanism with diverse reasoning paths to generate candidate answers, followed by a lightweight filtering and refinement module powered by small language models. This design substantially reduces reliance on large language models (LLMs) while maintaining high performance. Evaluated on two benchmark datasets of varying difficulty with ten different LLMs as backbones, MATA consistently achieves state-of-the-art accuracy and efficient inference, improving flexibility and practical applicability without compromising performance.

📝 Abstract
Recent advances in Large Language Models (LLMs) have significantly improved table understanding tasks such as Table Question Answering (TableQA), yet challenges remain in ensuring reliability, scalability, and efficiency, especially in resource-constrained or privacy-sensitive environments. In this paper, we introduce MATA, a multi-agent TableQA framework that leverages multiple complementary reasoning paths and a set of tools built with small language models. MATA generates candidate answers through diverse reasoning styles for a given table and question, then refines or selects the optimal answer with the help of these tools. Furthermore, it incorporates an algorithm designed to minimize expensive LLM agent calls, enhancing overall efficiency. MATA maintains strong performance with small, open-source models and adapts easily across various LLM types. Extensive experiments on two benchmarks of varying difficulty with ten different LLMs demonstrate that MATA achieves state-of-the-art accuracy and highly efficient reasoning while avoiding excessive LLM inference. Our results highlight that careful orchestration of multiple reasoning pathways yields scalable and reliable TableQA. The code is available at https://github.com/AIDAS-Lab/MATA.
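The abstract describes a generate-then-select pipeline: multiple agents answer the same table question via complementary reasoning styles, and a lightweight module picks or refines the final answer without extra LLM calls when the agents already agree. A minimal sketch of that flow, assuming illustrative agent and table structures (the function names, table schema, and majority-vote selector are hypothetical stand-ins, not the authors' actual API):

```python
# Hypothetical sketch of MATA's high-level flow. Two "agents" answer the
# same question over a table via different reasoning styles; a lightweight
# selector (standing in for the small-language-model tools) picks the
# majority answer, avoiding a further expensive LLM call when they agree.
from collections import Counter

def program_style_agent(table, question):
    # Stand-in for a program/SQL reasoning path: scan rows directly.
    for row in table:
        if row["name"] == question["entity"]:
            return row[question["field"]]
    return None

def text_style_agent(table, question):
    # Stand-in for a free-text reasoning path; mirrors the lookup
    # with a comprehension so the sketch stays runnable.
    matches = [r[question["field"]] for r in table if r["name"] == question["entity"]]
    return matches[0] if matches else None

def select_answer(candidates):
    # Lightweight filter: majority vote among non-null candidates.
    votes = Counter(c for c in candidates if c is not None)
    return votes.most_common(1)[0][0] if votes else None

table = [
    {"name": "France", "capital": "Paris"},
    {"name": "Japan", "capital": "Tokyo"},
]
question = {"entity": "Japan", "field": "capital"}

candidates = [agent(table, question)
              for agent in (program_style_agent, text_style_agent)]
answer = select_answer(candidates)  # agreed-upon answer across paths
```

In the paper's actual system the candidate generators are LLM agents and the selector uses small-language-model tools; the vote here only illustrates how cheap post-hoc selection can replace additional LLM inference.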
Problem

Research questions and friction points this paper is trying to address.

Table Question Answering
reliability
scalability
efficiency
resource-constrained environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-Agent Framework
Table Question Answering
Small Language Models
Efficient Reasoning
Complementary Reasoning Paths
Sieun Hyeon
Department of Electrical and Computer Engineering, Seoul National University
AINLP
Jusang Oh
Interdisciplinary Program in Artificial Intelligence, Seoul National University
Sunghwan Steve Cho
Department of Electrical and Computer Engineering, Seoul National University
Jaeyoung Do
Department of Electrical and Computer Engineering, Seoul National University
Generative AI (LLMs)
Multi-Modal AI (NLP/Vision)
Big Data Systems