🤖 AI Summary
Existing retrieval-augmented generation (RAG) evaluation benchmarks struggle to capture real-world challenges such as multi-document synthesis, visual understanding, and fine-grained source attribution. To bridge this gap, this work introduces ViDoRe v3, a comprehensive multimodal RAG benchmark that integrates visual content, cross-document reasoning, and multilingual queries. The benchmark comprises roughly 26,000 visually rich document pages spanning ten specialized domains and 3,099 human-verified queries, each available in six languages, accompanied by high-quality annotations for retrieval relevance, bounding-box localization, and verified reference answers. Systematic evaluation shows that vision-aware retrievers significantly outperform purely text-based approaches, and that late-interaction models and textual reranking further improve performance. Nevertheless, current models still exhibit notable deficiencies in interpreting non-textual elements, answering open-ended questions, and achieving fine-grained visual grounding.
📝 Abstract
Retrieval-Augmented Generation (RAG) pipelines must address challenges beyond simple single-document retrieval, such as interpreting visual elements (tables, charts, images), synthesizing information across documents, and providing accurate source grounding. Existing benchmarks fail to capture this complexity: they often focus on textual data or single-document comprehension, or evaluate retrieval and generation in isolation. We introduce ViDoRe v3, a comprehensive multimodal RAG benchmark featuring multi-type queries over visually rich document corpora. It covers 10 datasets across diverse professional domains, comprising ~26,000 document pages paired with 3,099 human-verified queries, each available in 6 languages. Through 12,000 hours of human annotation effort, we provide high-quality annotations for retrieval relevance, bounding-box localization, and verified reference answers. Our evaluation of state-of-the-art RAG pipelines reveals that visual retrievers outperform textual ones, that late-interaction models and textual reranking substantially improve performance, and that hybrid or purely visual contexts enhance answer-generation quality. However, current models still struggle with non-textual elements, open-ended queries, and fine-grained visual grounding. To encourage progress on these challenges, the benchmark is released under a commercially permissive license at https://hf.co/vidore.