UNIDOC-BENCH: A Unified Benchmark for Document-Centric Multimodal RAG

📅 2025-10-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing MM-RAG evaluations are fragmented and oversimplified, failing to reflect real-world, document-centric multimodal scenarios. Method: We introduce UniDoc-Bench, the first large-scale, realistic benchmark for document-centric MM-RAG, built from real-world PDF documents spanning eight domains, supporting question answering over text, tables, and images and accommodating four multimodal RAG paradigms. QA pairs are constructed via multimodal information extraction, evidence linking, and human verification; standardized candidate pools, prompt templates, and evaluation protocols ensure fair comparison. Contribution/Results: Experiments show how visual context complements textual evidence and expose systematic limitations in current multimodal embeddings. We further demonstrate a substantial advantage for text-image fusion RAG on complex reasoning queries, providing empirical grounding and actionable design principles for robust MM-RAG systems.
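To make the construction pipeline above concrete, the sketch below shows one way a QA record with linked multimodal evidence could be represented. The dataclass layout, field names, and query-type labels are illustrative assumptions mirroring the summary and abstract, not the benchmark's actual schema or file format.

```python
from dataclasses import dataclass, field
from typing import List, Literal

# Illustrative sketch only: these fields are assumptions about what a
# UniDoc-Bench-style QA record might carry, not the benchmark's actual schema.

@dataclass
class Evidence:
    doc_id: str                                   # source PDF identifier
    page: int                                     # page the element was extracted from
    modality: Literal["text", "table", "figure"]  # kind of extracted element
    content: str                                  # passage text, serialized table, or figure caption/reference

@dataclass
class QAPair:
    question: str
    answer: str
    query_type: Literal["factual", "comparison", "summarization", "reasoning"]
    evidence: List[Evidence] = field(default_factory=list)  # linked supporting elements across modalities
    human_verified: bool = False                             # a subset of pairs gets multi-annotator review
```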

📝 Abstract
Multimodal retrieval-augmented generation (MM-RAG) is a key approach for applying large language models (LLMs) and agents to real-world knowledge bases, yet current evaluations are fragmented, focusing on either text or images in isolation or on simplified multimodal setups that fail to capture document-centric multimodal use cases. In this paper, we introduce UniDoc-Bench, the first large-scale, realistic benchmark for MM-RAG built from 70k real-world PDF pages across eight domains. Our pipeline extracts and links evidence from text, tables, and figures, then generates 1,600 multimodal QA pairs spanning factual retrieval, comparison, summarization, and logical reasoning queries. To ensure reliability, 20% of QA pairs are validated by multiple annotators and expert adjudication. UniDoc-Bench supports apples-to-apples comparison across four paradigms: (1) text-only, (2) image-only, (3) multimodal text-image fusion, and (4) multimodal joint retrieval -- under a unified protocol with standardized candidate pools, prompts, and evaluation metrics. Our experiments show that multimodal text-image fusion RAG systems consistently outperform both unimodal and jointly multimodal embedding-based retrieval, indicating that neither text nor images alone are sufficient and that current multimodal embeddings remain inadequate. Beyond benchmarking, our analysis reveals when and how visual context complements textual evidence, uncovers systematic failure modes, and offers actionable guidance for developing more robust MM-RAG pipelines.
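As a deliberately toy reading of the four paradigms, the self-contained sketch below uses a keyword-overlap score as a stand-in for real text, image, and multimodal embedding models; the candidate format, function names, and the fuse-by-concatenation step are simplifying assumptions, not the paper's implementation.

```python
# Toy sketch of the four retrieval paradigms over one shared candidate pool.
# overlap_score() stands in for real text/image/multimodal embedding similarity;
# every name here is illustrative, not taken from UniDoc-Bench.

def overlap_score(query: str, content: str) -> float:
    """Fraction of query tokens that appear in a candidate's text, caption, or OCR content."""
    q = set(query.lower().split())
    return len(q & set(content.lower().split())) / max(len(q), 1)

def retrieve(query: str, pool: list[dict], modalities: set[str], top_k: int = 2) -> list[str]:
    """Rank candidates of the allowed modalities by the toy score and return their IDs."""
    allowed = [c for c in pool if c["modality"] in modalities]
    ranked = sorted(allowed, key=lambda c: overlap_score(query, c["content"]), reverse=True)
    return [c["id"] for c in ranked[:top_k]]

pool = [
    {"id": "txt_03", "modality": "text",  "content": "revenue grew 12 percent year over year"},
    {"id": "fig_07", "modality": "image", "content": "bar chart of revenue by region 2023 vs 2024"},
    {"id": "tab_02", "modality": "table", "content": "revenue by segment in millions of dollars"},
]
query = "how did revenue change year over year"

text_only  = retrieve(query, pool, {"text"})                                     # (1) text-only
image_only = retrieve(query, pool, {"image"})                                    # (2) image-only
fusion     = retrieve(query, pool, {"text"}) + retrieve(query, pool, {"image"})  # (3) text-image fusion: retrieve per modality, then merge
joint      = retrieve(query, pool, {"text", "image", "table"})                   # (4) joint retrieval from one multimodal index
```

Under the unified protocol, each paradigm's retrieved candidates would then be fed to the same prompt template and scored with the same metrics, so observed differences can be attributed to the retrieval paradigm rather than to prompting.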
Problem

Research questions and friction points this paper is trying to address.

Fragmented evaluations of multimodal retrieval-augmented generation systems
Lack of realistic benchmarks for document-centric multimodal RAG use cases
Inadequacy of current multimodal embeddings for complex document understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extracts and links evidence from text, tables, and figures
Generates multimodal QA pairs for diverse reasoning tasks
Supports unified comparison of four retrieval paradigms under one evaluation protocol (a toy scoring sketch follows this list)
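The sketch below shows one way such a unified comparison could be scored: the same toy recall@k is applied to the evidence IDs retrieved by each paradigm over the same candidate pool. The metric choice and the ID convention are assumptions for illustration, not necessarily the benchmark's actual scoring.

```python
# Toy recall@k applied uniformly to every paradigm's retrieved candidate IDs.
# The metric and the ID naming are illustrative assumptions, not UniDoc-Bench's protocol.

def recall_at_k(retrieved_ids: list[str], gold_evidence_ids: set[str], k: int = 5) -> float:
    """Fraction of gold evidence items that appear among the top-k retrieved candidates."""
    hits = sum(1 for doc_id in retrieved_ids[:k] if doc_id in gold_evidence_ids)
    return hits / max(len(gold_evidence_ids), 1)

# Example: a fusion run surfaces both gold evidence items within its top 2 candidates.
print(recall_at_k(["txt_03", "fig_07", "tab_02"], {"txt_03", "fig_07"}, k=2))  # -> 1.0
```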
Authors

Xiangyu Peng
Salesforce AI Research
Can Qin
Salesforce
Computer Vision, Machine Learning, Deep Learning
Zeyuan Chen
Salesforce AI Research
Ran Xu
Salesforce AI Research
Caiming Xiong
Salesforce Research
Machine Learning, NLP, Computer Vision, Multimedia, Data Mining
Chien-Sheng Wu
Salesforce AI Research