🤖 AI Summary
Poor-quality unstructured table images severely degrade the comprehension performance of Vision Large Language Models (VLLMs). Method: We propose a training-free, annotation-free vision-language reasoning paradigm centered on the Neighbor-Guided Toolchain Reasoner (NGTR) framework. NGTR employs neighbor-guided retrieval to orchestrate multiple tools collaboratively and incorporates reflective process supervision for adaptive toolchain composition. The approach integrates lightweight visual preprocessing, retrieval augmentation, and interpretable reasoning, all without fine-tuning. Contribution/Results: We introduce the first multi-dimensional benchmark specifically designed for low-quality table images, systematically identifying image quality as the critical bottleneck. Evaluated across multiple public datasets, NGTR significantly improves structural recognition accuracy, demonstrating strong generalization, robustness to image degradation, and the diagnostic utility of our benchmark for analyzing VLLM limitations on real-world tabular data.
📝 Abstract
Pre-trained foundation models have recently made significant progress in structured table understanding and reasoning. However, despite advances in areas such as table semantic understanding and table question answering, recognizing the structure and content of unstructured tables with Vision Large Language Models (VLLMs) remains under-explored. In this work, we address this research gap by employing VLLMs in a training-free reasoning paradigm. First, we design a benchmark with various hierarchical dimensions relevant to table recognition. We then conduct in-depth evaluations with pre-trained VLLMs and find that low-quality image input is a significant bottleneck in the recognition process. Inspired by these findings, we propose the Neighbor-Guided Toolchain Reasoner (NGTR) framework, which integrates multiple lightweight models for low-level visual processing operations to mitigate issues with low-quality input images. Specifically, we utilize a neighbor retrieval mechanism to guide the generation of multiple tool invocation plans, transferring tool-selection experience from similar neighbors to the given input and thereby facilitating suitable tool selection. Additionally, we introduce a reflection module to supervise the tool invocation process. Extensive experiments on public table recognition datasets demonstrate that our approach significantly enhances the recognition capabilities of vanilla VLLMs. We believe the designed benchmark and the proposed NGTR framework offer a promising alternative solution for table recognition.
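The core loop described above, retrieving a similar neighbor's tool plan and applying it under reflective supervision, can be sketched in miniature. Everything below is a hypothetical illustration: the tool set, the feature representation, the similarity metric, and the quality function are placeholders, not the paper's actual components.

```python
# Toy sketch of neighbor-guided tool selection with a reflection check.
# All names (TOOLS, POOL, quality_fn) are illustrative assumptions.
from math import sqrt

# "Lightweight tools": here each one just transforms a feature vector.
TOOLS = {
    "denoise":  lambda v: [x * 0.9 + 0.1 for x in v],
    "deskew":   lambda v: [abs(x) for x in v],
    "binarize": lambda v: [1.0 if x > 0.5 else 0.0 for x in v],
}

# Experience pool: (features of a past image, tool plan that worked on it).
POOL = [
    ([0.9, 0.1, 0.8], ["denoise", "binarize"]),
    ([0.1, 0.9, 0.2], ["deskew"]),
]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def neighbor_plan(features):
    """Transfer the tool plan of the most similar past image."""
    return max(POOL, key=lambda item: cosine(features, item[0]))[1]

def run_toolchain(features, quality_fn, threshold=0.0):
    """Apply the retrieved plan step by step; the reflection check keeps
    a step only if it does not lower the quality estimate."""
    for name in neighbor_plan(features):
        candidate = TOOLS[name](features)
        if quality_fn(candidate) >= quality_fn(features):  # reflection
            features = candidate
    return features
```

For example, `run_toolchain([0.85, 0.2, 0.7], lambda v: sum(v) / len(v))` retrieves the `["denoise", "binarize"]` plan from the first pool entry and applies both steps, since each one raises the toy quality score. The design point the sketch captures is that tool selection is driven by retrieval from past experience rather than by training, and that each invocation is vetted before being committed.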