🤖 AI Summary
Systematic benchmarks for evaluating how well vision-language models (VLMs) localize multimodal information in long, visually complex documents are currently lacking. To address this gap, we introduce a benchmark specifically designed for long (5–200 page), multimodal document understanding. Our approach couples synthetic document generation with controllable "needle" injection, embedding critical information (either textual or text+image) at adjustable depths within each document. We further develop an automated evaluation framework that enables fine-grained, reproducible measurement of both long-range retrieval and cross-modal localization. The benchmark comprises 400 document variants and 8,250 questions. Extensive experiments reveal severe performance limitations across state-of-the-art VLMs, exposing fundamental bottlenecks in their ability to comprehend and reason over multimodal content in lengthy documents.
📝 Abstract
The proliferation of multimodal Large Language Models has significantly advanced the ability to analyze and understand complex data inputs from different modalities. However, the processing of long documents remains under-explored, largely due to a lack of suitable benchmarks. To address this, we introduce Document Haystack, a comprehensive benchmark designed to evaluate the performance of Vision Language Models (VLMs) on long, visually complex documents. Document Haystack features documents ranging from 5 to 200 pages and strategically inserts pure text or multimodal text+image "needles" at various depths within the documents to challenge VLMs' retrieval capabilities. Comprising 400 document variants and a total of 8,250 questions, it is supported by an objective, automated evaluation framework. We detail the construction and characteristics of the Document Haystack dataset, present results from prominent VLMs and discuss potential research avenues in this area.
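To make the needle-in-a-haystack setup described above concrete, the sketch below shows how a textual needle might be injected into a paginated document at a controllable depth and scored with simple string matching. It is a minimal illustration only: the names (`Needle`, `inject_needle`, `exact_match`) and the scoring rule are assumptions, not the benchmark's actual interface or pipeline.

```python
# Hypothetical sketch of controllable needle injection and automated scoring.
# Class/function names are illustrative, not Document Haystack's actual API.
from dataclasses import dataclass


@dataclass
class Needle:
    """A key-value fact hidden in the document, e.g. 'The secret fruit is a mango.'"""
    key: str                # what the question asks about, e.g. "secret fruit"
    value: str              # the answer to retrieve, e.g. "mango"
    is_image: bool = False  # if True, the value would be rendered as an image on the page


def inject_needle(pages: list[str], needle: Needle, depth: float) -> list[str]:
    """Append the needle's statement to the page at a relative depth in [0, 1].

    depth=0.0 places it on the first page, depth=1.0 on the last page.
    """
    target = min(int(depth * len(pages)), len(pages) - 1)
    statement = f"The {needle.key} is {needle.value}."
    pages = pages.copy()
    pages[target] = pages[target] + "\n" + statement
    return pages


def exact_match(model_answer: str, needle: Needle) -> bool:
    """Objective, automated scoring: does the model's answer contain the hidden value?"""
    return needle.value.lower() in model_answer.lower()


if __name__ == "__main__":
    doc = [f"Filler content for page {i + 1}." for i in range(200)]  # a 200-page document
    needle = Needle(key="secret fruit", value="mango")
    doc = inject_needle(doc, needle, depth=0.5)                      # hide it mid-document
    print(exact_match("I believe the answer is mango.", needle))     # -> True
```

Varying `depth` across document variants is what makes retrieval performance measurable as a function of needle position; the actual benchmark additionally uses text+image needles and longer, visually complex pages.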