MRAMG-Bench: A Beyond-Text Benchmark for Multimodal Retrieval-Augmented Multimodal Generation

📅 2025-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal Retrieval-Augmented Multimodal Generation (MRAMG) lacks systematic evaluation. Method: This paper formally defines MRAMG as generating multimodal (text-and-image) answers grounded in joint text-image retrieval results. To enable rigorous assessment, we introduce MRAMG-Bench—the first large-scale, human-annotated benchmark comprising 4.3K documents, 14.2K images, and 4.8K cross-domain QA pairs—supporting multi-granularity difficulty categorization and multi-image scenario evaluation. We propose an LLM-MLLM collaborative generation framework and establish a dual-track evaluation protocol combining statistical metrics and LLM-as-judge assessment. Contribution/Results: MRAMG-Bench is publicly released. Experiments reveal substantial limitations of existing models in multimodal answer generation; our framework achieves significant improvements in multi-image reference accuracy and textual coherence.

📝 Abstract
Recent advancements in Retrieval-Augmented Generation (RAG) have shown remarkable performance in enhancing response accuracy and relevance by integrating external knowledge into generative models. However, existing RAG methods primarily focus on providing text-only answers, even in multimodal retrieval-augmented generation scenarios. In this work, we introduce the Multimodal Retrieval-Augmented Multimodal Generation (MRAMG) task, which aims to generate answers that combine both text and images, fully leveraging the multimodal data within a corpus. Despite the importance of this task, there is a notable absence of a comprehensive benchmark to effectively evaluate MRAMG performance. To bridge this gap, we introduce MRAMG-Bench, a carefully curated, human-annotated dataset comprising 4,346 documents, 14,190 images, and 4,800 QA pairs, sourced from three categories: Web Data, Academic Papers, and Lifestyle. The dataset incorporates diverse difficulty levels and complex multi-image scenarios, providing a robust foundation for evaluating multimodal generation tasks. To facilitate rigorous evaluation, MRAMG-Bench incorporates a comprehensive suite of both statistical and LLM-based metrics, enabling a thorough analysis of the performance of popular generative models on the MRAMG task. In addition, we propose an efficient multimodal answer generation framework that leverages both LLMs and MLLMs to generate multimodal responses. Our datasets are available at: https://huggingface.co/MRAMG.
Problem

Research questions and friction points this paper is trying to address.

How to systematically evaluate multimodal retrieval-augmented generation
Lack of a comprehensive benchmark for the MRAMG task
Need for an efficient multimodal answer generation framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formal definition of the Multimodal Retrieval-Augmented Multimodal Generation (MRAMG) task
MRAMG-Bench: a large-scale, human-annotated benchmark dataset
Collaborative generation framework combining LLMs and MLLMs