🤖 AI Summary
In Table VQA, large vision-language models (VLMs) are effective end to end but rely on massive parameter counts to capture fine-grained table structure, incurring prohibitive computational costs and deployment challenges. Lightweight OCR+LLM approaches, meanwhile, suffer from brittle reasoning caused by unstructured text representations and OCR errors. To address this, we propose TALENT, a framework in which a compact VLM jointly generates OCR output and a natural-language table description, yielding a dual-modal input over which an LLM performs multi-step quantitative reasoning; this shifts the paradigm toward LLM-centric collaborative multimodal understanding. We further introduce ReTabVQA, a more challenging benchmark emphasizing complex, multi-step numerical reasoning. Experiments demonstrate that TALENT matches or surpasses state-of-the-art large end-to-end VLMs on both public benchmarks and ReTabVQA while incurring significantly lower computational overhead.
📝 Abstract
Table Visual Question Answering (Table VQA) is typically addressed by large vision-language models (VLMs). While such models can answer directly from images, they often miss fine-grained details unless scaled to very large sizes, which is computationally prohibitive, especially for mobile deployment. A lighter alternative is to have a small VLM perform OCR and then use a large language model (LLM) to reason over structured outputs such as Markdown tables. However, these representations are not naturally optimized for LLMs and still introduce substantial errors. We propose TALENT (Table VQA via Augmented Language-Enhanced Natural-text Transcription), a lightweight framework that leverages dual representations of tables. TALENT prompts a small VLM to produce both OCR text and natural-language narration, then combines them with the question for an LLM to reason over. This reframes Table VQA as an LLM-centric multimodal reasoning task, where the VLM serves as a perception-narration module rather than a monolithic solver. Additionally, we construct ReTabVQA, a more challenging Table VQA dataset requiring multi-step quantitative reasoning over table images. Experiments show that TALENT enables a small VLM-LLM combination to match or surpass a single large VLM on both public datasets and ReTabVQA at significantly lower computational cost.
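To make the division of labor concrete, below is a minimal sketch of the pipeline the abstract describes. The prompt wording, function names, and signatures (`talent_answer`, `OCR_PROMPT`, `NARRATION_PROMPT`, the `small_vlm`/`llm` callables) are illustrative assumptions, not the paper's actual interface; the grounded elements are the two-pass VLM prompting (OCR transcription plus natural-language narration) and the single LLM reasoning call over both views and the question.

```python
from typing import Callable

# Hypothetical prompts; the paper's actual prompt text is not given in the abstract.
OCR_PROMPT = (
    "Transcribe the table in this image as structured text, "
    "preserving rows, columns, and cell values."
)
NARRATION_PROMPT = (
    "Describe this table in plain natural language: what each row and "
    "column represents and the values they contain."
)


def talent_answer(
    table_image: bytes,
    question: str,
    small_vlm: Callable[[bytes, str], str],  # (image, prompt) -> text
    llm: Callable[[str], str],               # prompt -> answer
) -> str:
    """Sketch of the TALENT pipeline: a small VLM produces two views of the
    table (OCR text + narration), and an LLM reasons over both plus the question."""
    ocr_text = small_vlm(table_image, OCR_PROMPT)          # structured transcription
    narration = small_vlm(table_image, NARRATION_PROMPT)   # natural-language description
    reasoning_prompt = (
        "You are answering a question about a table.\n\n"
        f"OCR transcription:\n{ocr_text}\n\n"
        f"Natural-language description:\n{narration}\n\n"
        f"Question: {question}\n"
        "Reason step by step over the two table views, then state the final answer."
    )
    return llm(reasoning_prompt)
```

The dual representation hedges against each mode's weakness: the OCR view preserves exact cell values needed for arithmetic, while the narration conveys table structure and semantics that a flat transcription can lose.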