🤖 AI Summary
High-resolution image understanding remains a critical bottleneck for multimodal large language models (MLLMs). Existing region-based retrieval-augmented methods often fragment coherent objects, distorting semantic similarity computation. To address this, we propose a synergistic framework integrating multi-resolution semantic fusion with sliding-window open-vocabulary detection. First, we extract multi-scale image features and construct a cross-resolution semantic similarity graph to mitigate segmentation-induced semantic bias. Second, we employ a training-free sliding-window detector for global, fine-grained object localization and semantic alignment. Our approach seamlessly integrates pretrained retrieval-augmented generation (RAG) models with open-vocabulary detection capabilities—requiring no additional annotations or model fine-tuning. Evaluated on multiple high-resolution understanding benchmarks, it consistently improves the performance of mainstream MLLMs, demonstrating the effectiveness of cross-scale semantic modeling and zero-shot localization.
📝 Abstract
Understanding high-resolution images remains a significant challenge for multimodal large language models (MLLMs). Recent studies address this issue by dividing the image into smaller crops and computing the semantic similarity between each crop and a query using a pretrained retrieval-augmented generation (RAG) model. The most relevant crops are then selected to localize the target object and suppress irrelevant information. However, such crop-based processing can fragment complete objects across multiple crops, thereby disrupting the computation of semantic similarity. In our experiments, we find that objects of different sizes are better handled at different resolutions. Based on this observation, we propose Multi-resolution Retrieval-Detection (MRD), a training-free framework for high-resolution image understanding. To address the semantic similarity bias caused by objects being split across image crops, we propose a multi-resolution semantic fusion method, which integrates semantic similarity maps obtained at different resolutions to produce more accurate semantic information and preserve the integrity of target objects. Furthermore, to achieve direct localization of target objects at a global scale, we introduce an open-vocabulary object detection (OVD) model that identifies object regions using a sliding-window approach. Experiments on high-resolution image understanding benchmarks with different MLLMs demonstrate the effectiveness of our approach.
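The multi-resolution fusion idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the crop-query scorer (here `crop_query_similarity`) is a hypothetical stand-in for the pretrained RAG model, and nearest-neighbour upsampling plus averaging is one simple way to combine per-resolution similarity maps onto a common grid.

```python
import numpy as np

# Hypothetical stand-in: in MRD, crop-query similarity comes from a
# pretrained RAG model. Here we return a deterministic pseudo-score
# purely so the sketch runs end to end.
def crop_query_similarity(crop: np.ndarray, query: str) -> float:
    rng = np.random.default_rng(len(query) + crop.size)
    return float(rng.random())

def similarity_map(image: np.ndarray, query: str, crop: int) -> np.ndarray:
    """Tile the image with non-overlapping crops of side `crop`;
    produce one crop-query similarity score per tile."""
    h, w = image.shape[:2]
    rows, cols = h // crop, w // crop
    scores = np.empty((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * crop:(r + 1) * crop, c * crop:(c + 1) * crop]
            scores[r, c] = crop_query_similarity(patch, query)
    return scores

def fuse_maps(maps: list[np.ndarray], out_shape: tuple[int, int]) -> np.ndarray:
    """Upsample each per-resolution map to a common grid (nearest
    neighbour) and average, so an object split across crops at one
    resolution can still be scored whole at a coarser one."""
    fused = np.zeros(out_shape)
    for m in maps:
        ry = np.linspace(0, m.shape[0] - 1, out_shape[0]).round().astype(int)
        rx = np.linspace(0, m.shape[1] - 1, out_shape[1]).round().astype(int)
        fused += m[np.ix_(ry, rx)]
    return fused / len(maps)

# Toy 448x448 "image"; score it at three crop sizes and fuse.
image = np.zeros((448, 448, 3))
maps = [similarity_map(image, "a red car", s) for s in (224, 112, 56)]
fused = fuse_maps(maps, (8, 8))
print(fused.shape)
```

Crops that cut through an object tend to score low at a fine resolution but high at a coarser one where the object fits in a single tile; averaging across resolutions keeps those regions from being discarded.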