🤖 AI Summary
This work addresses the lack of systematic evaluation of the sub-image localization capability of multimodal large language models (MLLMs) under ultra-long visual contexts. To this end, we introduce MMNeedle, a benchmark that constructs long-sequence visual inputs via image stitching and evaluates fine-grained sub-image localization through instruction-driven multi-image retrieval tasks. We propose an automated sub-image annotation protocol for long-context stress testing. Our evaluation reveals a substantial performance gap (over 30% in accuracy) between API-based models (e.g., GPT-4o) and open-source MLLMs under extended visual contexts, along with distinct hallucination patterns, most notably pronounced negative-sample hallucinations in GPT-4o. All code, data, and evaluation tools are publicly released to foster reproducible research and community advancement.
📝 Abstract
Multimodal Large Language Models (MLLMs) have shown significant promise in various applications, leading to broad interest from researchers and practitioners alike. However, a comprehensive evaluation of their long-context capabilities remains underexplored. To address this gap, we introduce the MultiModal Needle-in-a-haystack (MMNeedle) benchmark, specifically designed to assess the long-context capabilities of MLLMs. Beyond multi-image input, we employ image stitching to further increase the input context length, and develop a protocol to automatically generate labels for sub-image-level retrieval. Essentially, MMNeedle evaluates MLLMs by stress-testing their capability to locate a target sub-image (needle) within a set of images (haystack) based on textual instructions and descriptions of image contents. This setup necessitates an advanced understanding of extensive visual contexts and effective information retrieval within long-context image inputs. With this benchmark, we evaluate state-of-the-art MLLMs, encompassing both API-based and open-source models. The findings reveal that GPT-4o consistently surpasses other models in long-context scenarios, but suffers from hallucination problems in negative samples, i.e., when needles are not in the haystacks. Our comprehensive long-context evaluation of MLLMs also sheds light on the considerable performance gap between API-based and open-source models. All the code, data, and instructions required to reproduce the main results are available at https://github.com/Wang-ML-Lab/multimodal-needle-in-a-haystack.
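The stitching-plus-labeling protocol described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the benchmark's actual code: it represents each image as a 2D list of pixel values, tiles them into a rows × cols grid, optionally places a needle image in a random cell, and records the needle's (row, column) position as the retrieval label.

```python
import random

def stitch_grid(images, rows, cols, needle=None):
    """Tile `images` (each a 2D pixel list of identical height/width) into a
    rows x cols canvas. If `needle` is given, it replaces one random cell,
    and its (row, col) position is returned as the ground-truth label;
    otherwise the label is None (a negative sample, i.e., no needle).
    Hypothetical helper sketching MMNeedle-style stitching and auto-labeling.
    """
    h, w = len(images[0]), len(images[0][0])
    cells = list(images[:rows * cols])
    label = None
    if needle is not None:
        idx = random.randrange(rows * cols)
        cells[idx] = needle
        label = divmod(idx, cols)  # (row, col) of the needle sub-image
    # Paste each cell into its slot on the stitched canvas.
    canvas = [[0] * (w * cols) for _ in range(h * rows)]
    for k, img in enumerate(cells):
        r, c = divmod(k, cols)
        for y in range(h):
            for x in range(w):
                canvas[r * h + y][c * w + x] = img[y][x]
    return canvas, label

# Example: four 2x2 "images" filled with their index, needle filled with 9.
images = [[[k] * 2 for _ in range(2)] for k in range(4)]
needle = [[9] * 2 for _ in range(2)]
canvas, label = stitch_grid(images, rows=2, cols=2, needle=needle)
```

A model under test would then receive the stitched canvas and a description of the needle's contents, and is scored on whether its predicted (row, column) matches the automatically generated label; passing `needle=None` yields the negative samples used to probe hallucination.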