🤖 AI Summary
This work addresses the issue of error accumulation in state representations caused by fixed textual serialization in multi-turn table-based question answering, which undermines both reasoning accuracy and efficiency. To mitigate this, the authors propose a training-free inference framework that introduces an action-conditioned multimodal selection strategy to dynamically integrate visual and textual representations. The approach further leverages table metadata—including dimensions, data types, and key values—to safely plan and compress the reasoning trajectory. Empirical results demonstrate that the proposed method outperforms baseline models by 4.87% in accuracy and achieves a 5.42% improvement over static configurations, while simultaneously reducing inference latency by 33.35%.
📝 Abstract
Multimodal reasoning has emerged as a powerful framework for enhancing the capabilities of reasoning models. While multi-turn table reasoning methods have improved reasoning accuracy through tool use and reward modeling, they rely on fixed text serialization for table state readouts. This introduces representation errors in table encoding that accumulate significantly over multiple turns. Such accumulation is alleviated by tabular grounding methods, but at the expense of inference compute and cost, rendering real-world deployment impractical. To address this, we introduce TABQAWORLD, a table reasoning framework that jointly optimizes tabular actions through representation and estimation. For representation, TABQAWORLD employs an action-conditioned multimodal selection policy, which dynamically switches between visual and textual representations to maximize table state readout reliability. For estimation, TABQAWORLD optimizes the stepwise reasoning trajectory using table metadata, including dimensions, data types, and key values, safely planning the trajectory and compressing low-complexity actions to reduce conversation turns and latency. Designed as a training-free framework, TABQAWORLD achieves state-of-the-art performance in empirical evaluations, with a 4.87% accuracy improvement over baselines, along with a 5.42% accuracy gain and a 33.35% inference-latency reduction over static settings, establishing a new standard for reliable and efficient table reasoning.
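To make the two mechanisms concrete, here is a minimal sketch of what an action-conditioned representation selector and a metadata-based plan compressor could look like. All names (`TableMeta`, `select_representation`, `compress_plan`, the action labels, and the size threshold) are invented for illustration and do not come from the paper:

```python
# Hypothetical sketch of the two ideas described in the abstract
# (all identifiers and thresholds are assumptions, not the authors' code):
# 1) action-conditioned selection of a visual vs. textual table readout
# 2) metadata-guided compression of low-complexity actions into one turn
from dataclasses import dataclass

@dataclass
class TableMeta:
    n_rows: int
    n_cols: int
    dtypes: dict       # column name -> dtype string
    key_values: dict   # column name -> sample of distinctive values

# Actions assumed to be layout-sensitive and thus to benefit from a
# rendered (image) readout; all other actions use text serialization.
VISUAL_ACTIONS = {"locate_cell", "inspect_layout"}

def select_representation(action: str, meta: TableMeta) -> str:
    """Pick the table readout modality for the next reasoning turn."""
    if action in VISUAL_ACTIONS and meta.n_rows * meta.n_cols <= 400:
        return "image"   # render small tables: reliable for spatial lookups
    return "text"        # serialize otherwise: cheap and scalable

def compress_plan(plan: list, meta: TableMeta) -> list:
    """Merge consecutive low-complexity actions (e.g. simple filters)
    into a single turn to cut conversation length and latency."""
    LOW_COMPLEXITY = {"filter", "select_column", "sort"}
    compressed, buffer = [], []
    for action in plan:
        if action in LOW_COMPLEXITY:
            buffer.append(action)
        else:
            if buffer:
                compressed.append("+".join(buffer))
                buffer = []
            compressed.append(action)
    if buffer:
        compressed.append("+".join(buffer))
    return compressed

meta = TableMeta(n_rows=20, n_cols=5,
                 dtypes={"year": "int"}, key_values={"year": [2019, 2020]})
print(select_representation("locate_cell", meta))                 # image
print(compress_plan(["filter", "sort", "aggregate", "filter"], meta))
# ['filter+sort', 'aggregate', 'filter'] -- three turns instead of four
```

The sketch captures the claimed trade-off: visual readouts are reserved for actions where text serialization is error-prone, while metadata (here, table dimensions) gates when rendering is affordable, and merging low-complexity steps directly reduces turn count.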