🤖 AI Summary
Current large vision-language models often generate an excess of redundant visual tokens when processing information-dense images such as infographics and documents, incurring high computational cost and struggling to focus on task-relevant regions. To mitigate this, the authors propose PinPoint, a two-stage framework that first employs an Instruction-Region Alignment mechanism to localize the regions relevant to a given text instruction and then extracts visual features only from those targeted areas. This substantially reduces irrelevant tokens, improving both inference efficiency and accuracy. The work also introduces the first annotated dataset designed specifically for instruction-driven region localization and reports state-of-the-art performance on challenging document-based VQA benchmarks, including InfographicVQA.
📝 Abstract
Large Vision-Language Models (LVLMs) have shown strong performance across various multimodal tasks by leveraging the reasoning capabilities of Large Language Models (LLMs). However, processing visually complex and information-rich images, such as infographics or document layouts, requires these models to generate a large number of visual tokens, leading to significant computational overhead. To address this, we propose PinPoint, a novel two-stage framework that first identifies instruction-relevant image regions and then refines them to extract fine-grained visual features for improved reasoning and efficiency. Central to our approach is the Instruction-Region Alignment mechanism, which localizes relevant regions using both visual input and textual instructions. We further introduce new annotations that provide richer ground-truth supervision for instruction-relevant regions across challenging VQA benchmarks: InfographicVQA, MultiPageDocVQA, and SinglePageDocVQA. Experimental results show that PinPoint not only achieves superior accuracy compared to existing methods but also reduces computational overhead by minimizing irrelevant visual tokens.
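To make the two-stage idea concrete, the sketch below illustrates the general pattern the abstract describes: score image patches against an instruction embedding (a stand-in for Instruction-Region Alignment), then keep only the top-scoring patches as visual tokens. This is a minimal illustration under assumed interfaces, not the authors' implementation; the function name `pinpoint_two_stage` and the cosine-similarity scoring are our own simplifications.

```python
import numpy as np

def pinpoint_two_stage(patch_feats, instr_feat, keep_ratio=0.25):
    """Illustrative sketch (not the paper's code).

    Stage 1: score each image-patch feature against the instruction
    embedding (here, plain cosine similarity stands in for the
    Instruction-Region Alignment mechanism).
    Stage 2: keep only the top-scoring fraction of patches as the
    visual tokens passed on to the LLM, discarding the rest.
    """
    # Stage 1: instruction-region relevance via cosine similarity
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    q = instr_feat / np.linalg.norm(instr_feat)
    scores = p @ q  # one relevance score per patch

    # Stage 2: retain only the instruction-relevant patches
    k = max(1, int(len(scores) * keep_ratio))
    keep = np.sort(np.argsort(scores)[-k:])  # indices of top-k patches
    return patch_feats[keep], keep
```

With `keep_ratio=0.25`, three quarters of the visual tokens are dropped before reaching the language model, which is the source of the efficiency gain the abstract claims; the real method localizes contiguous regions rather than scoring patches independently.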