AI Summary
Existing document-oriented RAG evaluation benchmarks heavily rely on synthetic data and suffer from narrow coverage, failing to reflect real-world bottlenecks. To address this, we introduce Double-Bench, the first fully open-source, dynamically updatable, large-scale, multilingual, and multimodal evaluation benchmark for document RAG. It spans six languages, four categories of authentic documents (e.g., financial reports, legal contracts), over 70,000 manually annotated pages, and multi-hop queries, enabling fine-grained assessment across the retrieval, evidence localization, and generation stages. Extensive experiments across nine embedding models, four multimodal large language models (MLLMs), and four RAG frameworks uncover two critical failure modes: evidence-agnostic answering and model overconfidence. Notably, we observe a significant narrowing of the performance gap between visual and textual embeddings. Double-Bench establishes a reproducible, scalable, and comprehensive evaluation infrastructure to advance robust, real-world document RAG systems.
Abstract
Retrieval-Augmented Generation (RAG) systems built on Multimodal Large Language Models (MLLMs) show great promise for complex document understanding, yet their development is critically hampered by inadequate evaluation. Current benchmarks often focus on a specific part of the document RAG pipeline and rely on synthetic data with incomplete ground-truth and evidence labels, and therefore fail to reflect real-world bottlenecks and challenges. To overcome these limitations, we introduce Double-Bench: a new large-scale, multilingual, and multimodal evaluation system that produces fine-grained assessments of each component within document RAG systems. It comprises 3,276 documents (72,880 pages) and 5,168 single- and multi-hop queries across 6 languages and 4 document types, with streamlined dynamic-update support to mitigate potential data contamination. Queries are grounded in exhaustively scanned evidence pages and verified by human experts to ensure maximum quality and completeness. Our comprehensive experiments across 9 state-of-the-art embedding models, 4 MLLMs, and 4 end-to-end document RAG frameworks demonstrate that the gap between text and visual embedding models is narrowing, highlighting the need for stronger document retrieval models. Our findings also reveal an over-confidence dilemma in current document RAG frameworks, which tend to provide answers even without evidence support. We hope our fully open-source Double-Bench provides a rigorous foundation for future research on advanced document RAG systems. We plan to collect timely corpora and release new benchmarks on an annual basis.