Document Haystack: A Long Context Multimodal Image/Document Understanding Vision LLM Benchmark

📅 2025-07-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models (VLMs) lack systematic benchmarks for evaluating multimodal information localization in long-context, visually complex documents. To address this gap, we introduce the first benchmark specifically designed for ultra-long (5–200 pages), multimodal document understanding. Our approach leverages synthetic document generation coupled with controllable “needle” injection to embed critical information—either textual or image-text—within documents at adjustable depths. We further develop an automated evaluation framework that enables fine-grained, reproducible measurement of both long-range reasoning and cross-modal localization capabilities. The benchmark comprises 400 document variants and 8,250 questions. Extensive experiments reveal severe performance limitations across state-of-the-art VLMs, exposing fundamental bottlenecks in their ability to comprehend and reason over multimodal content in lengthy documents.
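The controllable needle injection described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the key-value needle format, and the page-level insertion granularity are all assumptions.

```python
def insert_needle(pages, needle, depth):
    """Insert a 'needle' string into a paged document at a relative
    depth (0.0 = first page, 1.0 = last page), returning a new list
    of pages and leaving the input untouched."""
    if not 0.0 <= depth <= 1.0:
        raise ValueError("depth must be in [0, 1]")
    # Map the relative depth to a concrete page index.
    idx = min(int(depth * len(pages)), len(pages) - 1)
    new_pages = list(pages)
    new_pages[idx] = new_pages[idx] + "\n" + needle
    return new_pages

# Example: hide a key-value needle 60% of the way into a 10-page document.
pages = [f"Page {i} filler text." for i in range(1, 11)]
needle = 'The secret fruit is a "pomegranate".'
doc = insert_needle(pages, needle, depth=0.6)
```

Sweeping `depth` over a grid (and repeating for text-only vs. text+image needles) would yield the adjustable-depth document variants the summary describes.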

📝 Abstract
The proliferation of multimodal Large Language Models has significantly advanced the ability to analyze and understand complex data inputs from different modalities. However, the processing of long documents remains under-explored, largely due to a lack of suitable benchmarks. To address this, we introduce Document Haystack, a comprehensive benchmark designed to evaluate the performance of Vision Language Models (VLMs) on long, visually complex documents. Document Haystack features documents ranging from 5 to 200 pages and strategically inserts pure text or multimodal text+image "needles" at various depths within the documents to challenge VLMs' retrieval capabilities. Comprising 400 document variants and a total of 8,250 questions, it is supported by an objective, automated evaluation framework. We detail the construction and characteristics of the Document Haystack dataset, present results from prominent VLMs and discuss potential research avenues in this area.
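The abstract says only that evaluation is objective and automated; one plausible minimal form of such a scorer is a normalized substring match between the model's answer and the expected needle value. The function below is a hypothetical sketch under that assumption, not the paper's framework.

```python
import string

def score_answer(prediction: str, expected: str) -> bool:
    """Binary scoring: an answer counts as correct iff the expected
    needle value appears in the model's prediction, ignoring case
    and punctuation."""
    def norm(s: str) -> str:
        return s.lower().translate(
            str.maketrans("", "", string.punctuation)).strip()
    return norm(expected) in norm(prediction)

# score_answer('The secret fruit is "Pomegranate".', "pomegranate")  -> True
# score_answer('I could not find the answer.', "pomegranate")        -> False
```

Aggregating this binary score over all 8,250 questions would give a reproducible retrieval-accuracy number per model, document length, and needle depth.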
Problem

Research questions and friction points this paper is trying to address.

Lack of suitable benchmarks for long-document processing in VLMs
Need to evaluate VLMs on visually complex, multimodal documents
Assessing VLMs' retrieval capabilities in lengthy contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces Document Haystack benchmark for VLMs
Tests VLMs on long multimodal documents
Objective, automated evaluation framework over 400 document variants and 8,250 questions