🤖 AI Summary
This work proposes a lightweight, single-step framework leveraging large language models (LLMs) for table-based question answering, circumventing the high latency and computational overhead of conventional multi-stage data preparation pipelines. The approach directly generates high-quality table processing programs through reinforcement learning, featuring several key innovations: a self-supervised reward mechanism, variance-aware resampling, operation merging, and an adaptive rollback strategy. Evaluated on two benchmark datasets, the method achieves average accuracy improvements of 9.55 and 6.08 percentage points, respectively, while attaining a table compression rate of 79% and reducing monetary inference cost by a factor of 2.2.
📝 Abstract
Table Question Answering (TQA) aims to answer natural language questions over structured tables. Large Language Models (LLMs) enable promising solutions to this problem, with operator-centric solutions that generate table manipulation pipelines in a multi-step manner offering state-of-the-art performance. However, these solutions rely on multiple LLM calls, resulting in prohibitive latencies and computational costs.
We propose Operation-R1, the first framework that trains lightweight LLMs (e.g., Qwen-4B/1.7B) via a novel variant of reinforcement learning with verifiable rewards to produce high-quality data-preparation pipelines for TQA in a single inference step. To train such an LLM, we first introduce a self-supervised reward mechanism that automatically derives fine-grained, pipeline-wise supervision signals for LLM training. We also propose variance-aware group resampling to mitigate training instability. To further enhance the robustness of pipeline generation, we develop two complementary mechanisms: operation merge, which filters spurious operations through multi-candidate consensus, and adaptive rollback, which offers runtime protection against information loss during data transformation. Experiments on two benchmark datasets show that, with the same LLM backbone, Operation-R1 achieves average absolute accuracy gains of 9.55 and 6.08 percentage points over multi-step preparation baselines, with 79% table compression and a 2.2× reduction in monetary cost.
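To make the operation-merge idea concrete, here is a minimal sketch of multi-candidate consensus filtering. The paper does not specify its exact algorithm; the function name, the operation-name representation, and the majority threshold below are all illustrative assumptions: sample several candidate pipelines, then keep only operations supported by a sufficient fraction of them.

```python
from collections import Counter

def merge_operations(candidate_pipelines, min_support=0.5):
    """Hypothetical consensus filter: keep operations that appear in
    at least `min_support` of the sampled candidate pipelines."""
    k = len(candidate_pipelines)
    # Count each operation once per pipeline it occurs in.
    counts = Counter(op for pipeline in candidate_pipelines
                     for op in set(pipeline))
    # Preserve the operation order of the first candidate.
    return [op for op in candidate_pipelines[0]
            if counts[op] / k >= min_support]

# Three sampled pipelines; "sort_rows" appears in only one
# candidate (support 1/3) and is filtered out as spurious.
candidates = [
    ["filter_rows", "select_cols", "sort_rows"],
    ["filter_rows", "select_cols"],
    ["filter_rows", "select_cols"],
]
print(merge_operations(candidates))  # ['filter_rows', 'select_cols']
```

In this toy setting, consensus across candidates plays the role the abstract describes: operations that only a minority of samples generate are treated as noise and dropped before the pipeline is executed.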