🤖 AI Summary
This work addresses the challenge of natural language interaction with tabular data in scientific literature. iTBLS is a dataset of interactive conversations situated in tables from scientific articles, framing each interaction as one of three tasks: interpretation, modification, or generation. This broadens the scope of prior work, which modeled interactions as factoid question-answering or procedure synthesis, to cover mathematical reasoning, natural language manipulation, and expansion of existing tables through conversation. The paper presents baseline approaches using zero-shot prompting and parameter-efficient fine-tuning to suit different compute budgets, and introduces a novel multi-step approach that, combined with parameter-efficient fine-tuning, achieves state-of-the-art results on iTBLS: up to 15% better on interpretation, 18% on modification, and 38% on generation relative to standard parameter-efficient fine-tuning. The key contributions are (1) a multi-task benchmark for conversational interaction with scientific tables, and (2) the multi-step approach that delivers these gains.
📝 Abstract
This paper introduces Interactive Tables (iTBLS), a dataset of interactive conversations situated in tables from scientific articles. The dataset is designed to facilitate human-AI collaborative problem-solving through AI-powered multi-task tabular capabilities. In contrast to prior work that models interactions as factoid QA or procedure synthesis, iTBLS broadens the scope of interactions to include mathematical reasoning, natural language manipulation, and expansion of existing tables from natural language conversation by delineating each interaction as one of three tasks: interpretation, modification, or generation. Additionally, the paper presents a suite of baseline approaches to iTBLS, utilizing zero-shot prompting and parameter-efficient fine-tuning for different computing situations. We also introduce a novel multi-step approach and show how it can be leveraged in conjunction with parameter-efficient fine-tuning to achieve state-of-the-art results on iTBLS, outperforming standard parameter-efficient fine-tuning by up to 15% on interpretation, 18% on modification, and 38% on generation.