🤖 AI Summary
This work addresses the limitations of current large language models in table-based question answering—namely, constrained context length, hallucination, and reliance on single-agent architectures—which hinder their ability to handle complex semantics and multi-hop reasoning. To overcome these challenges, the authors propose DataFactory, a multi-agent collaborative framework wherein a Data Leader agent coordinates specialized teams for databases and knowledge graphs to decompose queries into structured and relational subtasks. The framework enables natural language–driven dynamic negotiation and adaptive planning. Key innovations include an automated mapping function \( T: D \times S \times R \rightarrow G \) that transforms tabular data into knowledge graphs, context-aware prompting to mitigate hallucination, and a ReAct-based multi-agent coordination mechanism. Evaluated on TabFact, WikiTableQuestions, and FeTaQA, DataFactory achieves average accuracy gains of 20.2%–23.9% (Cohen’s d > 1), with multi-team collaboration yielding 5.5%–17.1% higher ROUGE-2 scores than single-team variants.
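The mapping \( T: D \times S \times R \rightarrow G \) takes table rows (D), a column schema (S), and declared inter-column relations (R) to a set of knowledge-graph triples (G). The paper does not specify an implementation; the following is a minimal sketch under illustrative assumptions (entity keyed by the first schema column, relations given as column-pair predicates; all names are hypothetical, not the paper's API):

```python
def table_to_kg(rows, schema, relations):
    """Sketch of T: D x S x R -> G.

    rows      (D): list of dicts, one per table row
    schema    (S): ordered column names; assume schema[0] identifies the row entity
    relations (R): (source_column, predicate, target_column) links between columns
    returns   (G): a set of (subject, predicate, object) triples
    """
    triples = set()
    key = schema[0]  # assumption: first column names each row's entity
    for row in rows:
        subject = row[key]
        # attribute triples: each non-key column becomes a predicate on the entity
        for col in schema[1:]:
            triples.add((subject, col, row[col]))
        # relational triples: declared relations link column values directly
        for src, predicate, dst in relations:
            triples.add((row[src], predicate, row[dst]))
    return triples

rows = [
    {"player": "A. Smith", "team": "Hawks", "city": "Atlanta"},
    {"player": "B. Jones", "team": "Bulls", "city": "Chicago"},
]
schema = ["player", "team", "city"]
relations = [("team", "based_in", "city")]

g = table_to_kg(rows, schema, relations)
# yields attribute triples like ("A. Smith", "team", "Hawks")
# and relational triples like ("Hawks", "based_in", "Atlanta")
```

The relational triples are what a pure row-wise (database-style) view of the table lacks: they let a graph team answer multi-hop questions ("which city is A. Smith's team based in?") by chaining edges rather than re-scanning rows.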
📝 Abstract
Table Question Answering (TableQA) enables natural language interaction with structured tabular data. However, existing large language model (LLM) approaches face critical limitations: context length constraints that restrict data handling capabilities, hallucination issues that compromise answer reliability, and single-agent architectures that struggle with complex reasoning scenarios involving semantic relationships and multi-hop logic. This paper introduces DataFactory, a multi-agent framework that addresses these limitations through specialized team coordination and automated knowledge transformation. The framework comprises a Data Leader employing the ReAct paradigm for reasoning orchestration, together with dedicated Database and Knowledge Graph teams, enabling the systematic decomposition of complex queries into structured and relational reasoning tasks. We formalize automated data-to-knowledge-graph transformation via the mapping function \( T: D \times S \times R \rightarrow G \), and implement natural language-based consultation that, unlike fixed-workflow multi-agent systems, enables flexible inter-agent deliberation and adaptive planning to improve coordination robustness. We also apply context engineering strategies that integrate historical patterns and domain knowledge to reduce hallucinations and improve query accuracy. Across TabFact, WikiTableQuestions, and FeTaQA, using eight LLMs from five providers, results show consistent gains. Our approach improves accuracy by 20.2% (TabFact) and 23.9% (WikiTQ) over baselines, with large effect sizes (Cohen's d > 1). Team coordination also outperforms single-team variants (+5.5% TabFact, +14.4% WikiTQ, +17.1% FeTaQA ROUGE-2). The framework offers design guidelines for multi-agent collaboration and a practical platform for enterprise data analysis through integrated structured querying and graph-based knowledge representation.
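The Data Leader's role can be pictured as a ReAct-style loop that routes each sub-task of a decomposed query to the Database team (structured filtering) or the Knowledge Graph team (triple-pattern matching), collecting an observation after each action. This is an illustrative sketch under stated assumptions, not the paper's implementation; every name and data shape below is hypothetical:

```python
def database_team(predicate, table):
    # stand-in for the Database team's structured (SQL-like) row filtering
    return [row for row in table if predicate(row)]

def kg_team(pattern, triples):
    # stand-in for the Knowledge Graph team's triple-pattern matching;
    # None in the pattern acts as a wildcard position
    return [t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))]

def data_leader(plan, table, triples):
    """Execute a decomposed query plan: each step names a team and a sub-task.

    Observations are recorded after each action; in a full ReAct loop the
    LLM's next "thought" would condition on these observations before
    choosing the next action.
    """
    observations = []
    for team, task in plan:
        obs = (database_team(task, table) if team == "database"
               else kg_team(task, triples))
        observations.append(obs)
    return observations

table = [{"player": "A. Smith", "points": 31},
         {"player": "B. Jones", "points": 12}]
triples = [("A. Smith", "plays_for", "Hawks"),
           ("B. Jones", "plays_for", "Bulls")]
plan = [
    ("database", lambda row: row["points"] > 20),  # structured sub-task
    ("kg", ("A. Smith", "plays_for", None)),       # relational sub-task
]
obs = data_leader(plan, table, triples)
```

The point of the split is that each team sees only the sub-task suited to its representation: threshold-style filters stay in the row-oriented view, while relationship lookups go to the graph, and the leader stitches the observations into a final answer.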