🤖 AI Summary
This work addresses the limited fine-grained text localization and reasoning capabilities of multimodal large language models (MLLMs) on complex documents. To this end, we introduce NiM—the first benchmark explicitly designed for precise, fine-grained text localization in document images—and propose Spot-IT, a novel method inspired by human reading behavior. Spot-IT integrates intelligent image patch sampling, visual token pruning, and Gaussian-weighted attention to enhance model sensitivity to critical textual details (e.g., nutritional values in menus or disclaimers in newspapers). Experiments reveal that state-of-the-art MLLMs achieve only modest performance on NiM, whereas Spot-IT significantly improves localization accuracy—particularly on layout-complex, clutter-rich documents. This work establishes a new, challenging benchmark for document-level fine-grained understanding and provides an interpretable, computationally efficient methodological framework.
📝 Abstract
While Multi-modal Large Language Models (MLLMs) have shown impressive capabilities in document understanding tasks, their ability to locate and reason about fine-grained details within complex documents remains understudied. Consider searching a restaurant menu for a specific nutritional detail or identifying a disclaimer in a lengthy newspaper article: tasks that demand careful attention to small but significant details within a broader narrative, akin to Finding Needles in Images (NiM). To address this gap, we introduce NiM, a carefully curated benchmark spanning diverse real-world documents, including newspapers, menus, and lecture images, specifically designed to evaluate MLLMs' capability on these intricate tasks. Building on this, we further propose Spot-IT, a simple yet effective approach that enhances MLLMs' capability through intelligent patch selection and Gaussian attention, motivated by how humans zoom in and focus when searching documents. Our extensive experiments reveal both the capabilities and limitations of current MLLMs in handling fine-grained document understanding tasks, while demonstrating the effectiveness of our approach. Spot-IT achieves significant improvements over baseline methods, particularly in scenarios requiring precise detail extraction from complex layouts.
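The abstract does not include pseudocode, but the core Gaussian-attention idea, biasing per-patch scores toward a region of interest before renormalizing, can be sketched minimally as below. All names and parameters here (`gaussian_patch_weights`, `sigma`, the 4x4 grid) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_patch_weights(grid_h, grid_w, center, sigma=1.5):
    """2-D Gaussian weights over a patch grid, peaked at `center` (row, col)."""
    rows = np.arange(grid_h)[:, None]
    cols = np.arange(grid_w)[None, :]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()

def reweight_attention(attn, weights):
    """Modulate flat per-patch attention scores by the Gaussian weights, then renormalize."""
    scored = attn * weights.ravel()
    return scored / scored.sum()

# Toy example: a 4x4 patch grid with the query's likely location near patch (1, 2).
w = gaussian_patch_weights(4, 4, center=(1, 2), sigma=1.0)
attn = np.full(16, 1 / 16)            # uniform baseline attention over 16 patches
focused = reweight_attention(attn, w)  # mass now concentrates around patch (1, 2)
```

In this sketch, patches far from the predicted region receive exponentially smaller weight, which is one simple way to mimic the human "zoom and focus" behavior the paper describes; the actual Spot-IT pipeline additionally selects patches and prunes visual tokens.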