🤖 AI Summary
This study investigates the internal mechanisms underlying the strong performance of large language models (LLMs) on tabular understanding tasks, which remain poorly understood despite their empirical success. Through a systematic empirical analysis of 16 LLMs, the work uncovers a three-stage attention pattern in how these models process tabular data, demonstrates that tabular tasks rely more heavily on deeper network layers, and reveals the hierarchical nature of expert activation in Mixture-of-Experts (MoE) architectures. Furthermore, it highlights the critical influence of prompting strategies on model behavior. These insights are derived from comprehensive experiments involving attention visualization, effective depth evaluation, expert activation tracking, and input ablation studies. The findings provide crucial empirical evidence and actionable design guidance for enhancing both the interpretability and performance of LLMs in tabular reasoning.
📝 Abstract
Despite the success of Large Language Models (LLMs) in table understanding, their internal mechanisms remain unclear. In this paper, we conduct an empirical study on 16 LLMs, covering general LLMs, specialist tabular LLMs, and Mixture-of-Experts (MoE) models, to explore how LLMs understand tabular data and perform downstream tasks. Our analysis focuses on four dimensions: attention dynamics, effective layer depth, expert activation, and the impact of input design. Key findings include: (1) LLMs follow a three-phase attention pattern -- early layers scan the table broadly, middle layers localize relevant cells, and late layers amplify their contributions; (2) tabular tasks require deeper layers than math reasoning to reach stable predictions; (3) MoE models activate table-specific experts in middle layers, with early and late layers sharing general-purpose experts; (4) Chain-of-Thought prompting increases table attention, which is further enhanced by table-tuning. We hope these findings and insights can facilitate interpretability and future research on table-related tasks.
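The attention-dynamics analysis above amounts to measuring, per layer, how much attention mass lands on the tokens that make up the table. A minimal sketch of that measurement is below; the function name, the use of the final query position, and the synthetic attention weights are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def table_attention_mass(attn, table_positions):
    """Per-layer fraction of attention from the final query token
    that falls on table-cell tokens.

    attn: array of shape (num_layers, seq_len, seq_len); each row
          is an attention distribution summing to 1.
    table_positions: token indices belonging to the serialized table.
    """
    last_row = attn[:, -1, :]                        # (num_layers, seq_len)
    return last_row[:, table_positions].sum(axis=1)  # (num_layers,)

# Synthetic demo: 3 layers, 6 tokens, with tokens 1-3 as the "table".
rng = np.random.default_rng(0)
raw = rng.random((3, 6, 6))
attn = raw / raw.sum(axis=-1, keepdims=True)         # normalize rows
mass = table_attention_mass(attn, [1, 2, 3])
print(mass)  # one value in [0, 1] per layer
```

Plotting such a curve over layers for a real model (e.g. via the `output_attentions` option in Hugging Face Transformers, averaged over heads) is one way the reported early-scan / middle-localize / late-amplify phases could be visualized.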