🤖 AI Summary
This work addresses the limitations of existing approaches to table understanding: large language models that operate on linearized tables often lose critical layout information, pure vision-based encoders struggle to preserve precise textual content, and naive fusion of the two modalities frequently induces cross-modal interference. To overcome these challenges, the authors propose DiVA-Former, a lightweight architecture that employs visual tokens as dynamic queries to distill long textual sequences into digest vectors, thereby enabling efficient and complementary fusion of the visual and textual modalities. Evaluated across 13 established table understanding benchmarks, the proposed method improves upon the pure-text baseline by 23.9% on average and achieves consistent gains over purely text-based, vision-only, and multimodal baselines.
📝 Abstract
LLMs typically linearize 2D tables into 1D sequences to fit their autoregressive architecture, which weakens row-column adjacency and other layout cues. In contrast, purely visual encoders can capture spatial cues yet often struggle to preserve exact cell text. Our analysis reveals that these two modalities provide highly distinct information to LLMs and exhibit strong complementarity. However, direct concatenation and other fusion methods yield limited gains and frequently introduce cross-modal interference. To address this issue, we propose DiVA-Former, a lightweight architecture designed to effectively integrate vision and text information. DiVA-Former leverages visual tokens as dynamic queries to distill long textual sequences into digest vectors, thereby effectively exploiting complementary vision-text information. Evaluated across 13 table benchmarks, DiVA-Former improves upon the pure-text baseline by 23.9% and achieves consistent gains over existing baselines using visual inputs, textual inputs, or a combination of both.
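The core mechanism, using visual tokens as dynamic queries that cross-attend over a long text sequence to produce a fixed-size set of digest vectors, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the module name `DigestFusion`, the single-layer design, and all dimensions are assumptions for exposition.

```python
import torch
import torch.nn as nn

class DigestFusion(nn.Module):
    """Hypothetical sketch of visual-query distillation: each visual token
    queries the (much longer) text-token sequence via cross-attention,
    yielding one digest vector per visual token."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, Nv, D) -- queries
        # text_tokens:   (B, Nt, D) -- keys/values; Nt may be much larger than Nv
        digests, _ = self.attn(visual_tokens, text_tokens, text_tokens)
        # Residual connection + norm; output shape (B, Nv, D) is independent
        # of the text length Nt, so the fused sequence stays short.
        return self.norm(visual_tokens + digests)

# Toy usage: 16 visual tokens distill a 512-token text sequence.
fusion = DigestFusion(dim=64)
v = torch.randn(2, 16, 64)
t = torch.randn(2, 512, 64)
out = fusion(v, t)
print(out.shape)  # torch.Size([2, 16, 64])
```

The key property shown here is that the distilled representation scales with the number of visual tokens rather than the text length, which is what makes this fusion lightweight compared with concatenating both token streams.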