VisR-Bench: An Empirical Study on Visual Retrieval-Augmented Generation for Multilingual Long Document Understanding

πŸ“… 2025-08-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing benchmarks are largely confined to English single-page documents or unimodal question answering, lacking comprehensive evaluation for multilingual long-document visual retrieval. Method: We introduce VisR-Bench, the first multimodal, visual retrieval-augmented generation benchmark supporting 16 languages, encompassing 1.2K long documents and 35K high-quality QA pairs, and enabling fine-grained cross-modal retrieval evaluation over figures, text, and tables. The benchmark also incorporates answer-agnostic queries to mitigate keyword-matching bias and better reflect real-world scenarios. Contribution/Results: End-to-end evaluation across text retrievers, multimodal encoders, and multimodal large language models (MLLMs) shows that MLLMs significantly outperform traditional methods; however, performance degrades notably on low-resource languages and structured table reasoning, exposing critical bottlenecks in current multilingual long-document visual retrieval and highlighting key directions for future work.
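
To make the evaluation setup concrete, below is a minimal sketch of how page-level retrieval metrics could be computed for a VisR-Bench-style benchmark, grouped by language and question type. All names here (QAPair, retrieve, gold_page) are illustrative assumptions, not the paper's actual code.

```python
# Hypothetical sketch of page-level retrieval scoring for a VisR-Bench-style
# setup. The retriever can be any of the evaluated families: text-based
# methods, multimodal encoders, or MLLM-based rankers.
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable

@dataclass
class QAPair:
    question: str
    doc_id: str
    gold_page: int   # index of the page containing the evidence
    language: str    # one of the 16 benchmark languages
    qtype: str       # "figure", "text", or "table"

def evaluate(retrieve: Callable[[str, str], list[int]],
             samples: list[QAPair], k: int = 5) -> dict:
    """Compute Recall@k and MRR, bucketed by (language, question type).

    `retrieve(question, doc_id)` is assumed to return the document's page
    indices ranked by predicted relevance to the question.
    """
    buckets: dict[tuple[str, str], list[tuple[float, float]]] = defaultdict(list)
    for s in samples:
        ranking = retrieve(s.question, s.doc_id)
        hit = float(s.gold_page in ranking[:k])
        rr = (1.0 / (ranking.index(s.gold_page) + 1)
              if s.gold_page in ranking else 0.0)
        buckets[(s.language, s.qtype)].append((hit, rr))
    return {key: {"recall@k": sum(h for h, _ in vals) / len(vals),
                  "mrr": sum(r for _, r in vals) / len(vals)}
            for key, vals in buckets.items()}
```

Bucketing by language and question type is what surfaces the reported failure modes: per-bucket scores make the drops on low-resource languages and table questions directly visible.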

πŸ“ Abstract
Most organizational data in this world are stored as documents, and visual retrieval plays a crucial role in unlocking the collective intelligence from all these documents. However, existing benchmarks focus on English-only document retrieval or only consider multilingual question-answering on a single-page image. To bridge this gap, we introduce VisR-Bench, a multilingual benchmark designed for question-driven multimodal retrieval in long documents. Our benchmark comprises over 35K high-quality QA pairs across 1.2K documents, enabling fine-grained evaluation of multimodal retrieval. VisR-Bench spans sixteen languages with three question types (figures, text, and tables), offering diverse linguistic and question coverage. Unlike prior datasets, we include queries without explicit answers, preventing models from relying on superficial keyword matching. We evaluate various retrieval models, including text-based methods, multimodal encoders, and MLLMs, providing insights into their strengths and limitations. Our results show that while MLLMs significantly outperform text-based and multimodal encoder models, they still struggle with structured tables and low-resource languages, highlighting key challenges in multilingual visual retrieval.
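
The abstract's point about queries without explicit answers can be illustrated with a simple lexical-overlap check: when most of a query's tokens reappear verbatim on the target page, a keyword matcher can retrieve that page without any multimodal understanding. The sketch below is a hedged illustration of that idea; the tokenizer and the 0.6 threshold are assumptions, not values from the paper.

```python
# Hypothetical filter showing why answer-agnostic queries matter: queries
# whose tokens mostly reappear on the gold page are solvable by superficial
# keyword matching, so a benchmark can drop or down-weight them.
import re

def tokens(text: str) -> set[str]:
    # Lowercased word tokens; a crude stand-in for a real tokenizer.
    return set(re.findall(r"\w+", text.lower()))

def keyword_solvable(query: str, page_text: str,
                     threshold: float = 0.6) -> bool:
    """True if a large fraction of query tokens appear verbatim on the
    page, i.e. the query is likely answerable by keyword matching alone."""
    q = tokens(query)
    return bool(q) and len(q & tokens(page_text)) / len(q) >= threshold
```

Filtering with a check like this forces retrievers to rely on semantic or visual understanding of the page rather than on surface string overlap.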
Problem

Research questions and friction points this paper is trying to address.

Multilingual visual retrieval for long documents
Evaluating multimodal retrieval in diverse languages
Addressing challenges in structured data and low-resource languages
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual benchmark for visual retrieval
Diverse QA pairs across long documents
Evaluation of multimodal retrieval models
πŸ‘₯ Authors
Jian Chen (University at Buffalo)
Ming Li (University of Maryland)
Jihyung Kil (Adobe Research)
Chenguang Wang
Tong Yu (Adobe Research)
Ryan Rossi (Adobe Research)
Tianyi Zhou (University of Maryland)
Changyou Chen (University at Buffalo)
Ruiyi Zhang (Adobe Research)