ViDoRe V3: A Comprehensive Evaluation of Retrieval Augmented Generation in Complex Real-World Scenarios

📅 2026-01-13
📈 Citations: 3
Influential: 1
🤖 AI Summary
Existing retrieval-augmented generation (RAG) evaluation benchmarks struggle to address real-world challenges such as multi-document synthesis, visual understanding, and fine-grained source attribution. To bridge this gap, this work introduces the first comprehensive multimodal RAG benchmark that integrates visual content, cross-document reasoning, and multilingual support. The benchmark comprises 26,000 visually rich document pages spanning ten specialized domains and 3,099 human-verified queries, accompanied by high-quality annotations for retrieval relevance, bounding box localization, and reference answers. Systematic evaluation reveals that vision-aware retrievers significantly outperform purely text-based approaches, and late interaction with re-ranking further enhances performance. Nevertheless, current models still exhibit notable deficiencies in interpreting non-textual elements, answering open-ended questions, and achieving fine-grained visual grounding.

📝 Abstract
Retrieval-Augmented Generation (RAG) pipelines must address challenges beyond simple single-document retrieval, such as interpreting visual elements (tables, charts, images), synthesizing information across documents, and providing accurate source grounding. Existing benchmarks fail to capture this complexity, often focusing on textual data, single-document comprehension, or evaluating retrieval and generation in isolation. We introduce ViDoRe v3, a comprehensive multimodal RAG benchmark featuring multi-type queries over visually rich document corpora. It covers 10 datasets across diverse professional domains, comprising ~26,000 document pages paired with 3,099 human-verified queries, each available in 6 languages. Through 12,000 hours of human annotation effort, we provide high-quality annotations for retrieval relevance, bounding box localization, and verified reference answers. Our evaluation of state-of-the-art RAG pipelines reveals that visual retrievers outperform textual ones, late-interaction models and textual reranking substantially improve performance, and hybrid or purely visual contexts enhance answer generation quality. However, current models still struggle with non-textual elements, open-ended queries, and fine-grained visual grounding. To encourage progress in addressing these challenges, the benchmark is released under a commercially permissive license at https://hf.co/vidore.
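The abstract reports that late-interaction retrievers substantially improve performance over single-vector approaches. As background, a minimal sketch of generic ColBERT-style late-interaction (MaxSim) scoring is shown below — this is an illustration of the general technique, not the paper's implementation; the function names and the toy embedding shapes are assumptions:

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late-interaction score: for each query token
    embedding, take its maximum cosine similarity over all document
    (or page-patch) embeddings, then sum over query tokens."""
    # L2-normalize rows so dot products are cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                        # (num_q_tokens, num_d_tokens)
    return float(sim.max(axis=1).sum())  # MaxSim per query token, summed

def rank_pages(query_vecs, page_embeddings):
    """Rank candidate pages by late-interaction score, best first."""
    scores = [maxsim_score(query_vecs, p) for p in page_embeddings]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```

In a full pipeline of the kind the benchmark evaluates, a first-stage retriever would produce this ranking and a separate (e.g. textual) reranker would then reorder the top candidates before generation.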
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
multimodal RAG
visual grounding
cross-document synthesis
complex real-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-Augmented Generation
Multimodal Benchmark
Visual Document Understanding
Source Grounding
Late-Interaction Retrieval
👥 Authors
António Loison (Illuin Technology)
Quentin Macé (Illuin Technology)
Antoine Edy (Illuin Technology)
Victor Xing (Illuin Technology)
Tom Balough (NVIDIA)
Gabriel Moreira (NVIDIA)
Bo Liu (NVIDIA)
Manuel Faysse (CentraleSupélec, Université Paris-Saclay)
Céline Hudelot (CentraleSupélec, Université Paris-Saclay)
Gautier Viaud (Illuin Technology)