AI Summary
Large language models (LLMs) struggle with complex table-based question answering (TQA), facing challenges in accurately locating relevant cells and relations and in performing reliable logical reasoning. To address this, we propose the Seek-and-Solve reasoning paradigm: first precisely identify salient cells and their semantic relations, then perform structured computation grounded in these locational cues, with both stages unified into a single SS-CoT (Seek-and-Solve Chain-of-Thought) reasoning chain. We empirically demonstrate that the intermediate reasoning steps produced during task simplification are more instructive than the final simplified outputs, motivating a distillable one-step prompting scheme and an In-Context Learning demonstration distillation strategy guided by SS-CoT paths. Evaluated across multiple TQA benchmarks, our approach significantly improves accuracy and robustness while maintaining inference efficiency and cross-table generalization, validating that explicit stepwise reasoning effectively unlocks LLMs' deeper reasoning capabilities.
Abstract
The complexities of table structures and question logic make table-based question answering (TQA) tasks challenging for Large Language Models (LLMs), often requiring task simplification before solving. This paper reveals that the reasoning process during task simplification may be more valuable than the simplified tasks themselves, and aims to improve TQA performance by leveraging LLMs' reasoning capabilities. We propose a Seek-and-Solve pipeline that instructs the LLM to first seek relevant information and then answer the question, integrating these two stages at the reasoning level into a coherent Seek-and-Solve Chain of Thought (SS-CoT). Additionally, we distill a single-step TQA-solving prompt from this pipeline, using demonstrations with SS-CoT paths to guide the LLM in solving complex TQA tasks under In-Context Learning settings. Experiments show that our approaches improve both performance and reliability while remaining efficient. Our findings emphasize the importance of eliciting LLMs' reasoning capabilities to handle complex TQA tasks effectively.
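The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual prompts or code: the prompt wording, the `llm` callable interface, the table serialization, and the stub model used in the usage example are all assumptions for demonstration.

```python
def serialize_table(table):
    """Render a list-of-dicts table as a simple pipe-delimited grid (illustrative)."""
    headers = list(table[0].keys())
    lines = [" | ".join(headers)]
    lines += [" | ".join(str(row[h]) for h in headers) for row in table]
    return "\n".join(lines)

def seek_and_solve(table, question, llm):
    """Sketch of the Seek-and-Solve pipeline: stage 1 (Seek) asks the model to
    locate the relevant cells and relations; stage 2 (Solve) reasons over that
    located information to answer. The two stage outputs are concatenated into
    one combined chain of thought, mirroring the SS-CoT idea."""
    table_text = serialize_table(table)
    seek = llm(
        f"Table:\n{table_text}\n\nQuestion: {question}\n"
        "Step 1 (Seek): list the cells and relations needed to answer."
    )
    solve = llm(
        f"Table:\n{table_text}\n\nQuestion: {question}\n"
        f"Relevant information:\n{seek}\n"
        "Step 2 (Solve): reason over this information and give the final answer."
    )
    return seek + "\n" + solve  # the combined reasoning chain

# Usage with a trivial stub standing in for a real LLM (hypothetical):
def stub_llm(prompt):
    return "Seek: row 'Alice', column 'score'" if "Step 1" in prompt else "Answer: 91"

table = [{"name": "Alice", "score": 91}, {"name": "Bob", "score": 85}]
result = seek_and_solve(table, "What is Alice's score?", stub_llm)
```

A real deployment would replace `stub_llm` with a call to an actual model; the point of the sketch is only the control flow, i.e. that the seek output is fed back into the solve prompt so the final reasoning is grounded in the located cells.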