🤖 AI Summary
Existing multimodal retrieval benchmarks emphasize textual knowledge utilization while neglecting scenarios in which visual information plays an essential role. Method: This work systematically identifies and categorizes nine distinct scenarios where visual knowledge outperforms textual knowledge, and introduces MRAG-Bench, a vision-centric multimodal Retrieval-Augmented Generation (RAG) benchmark comprising 16,130 images and 1,353 human-annotated multiple-choice questions. The benchmark enables a comparative analysis of visual versus textual knowledge gains under a unified evaluation protocol across models. Contribution/Results: Experiments on 14 Large Vision-Language Models (LVLMs) show that image-based augmentation consistently surpasses text-based augmentation. However, even the top-performing model, GPT-4o, gains only 5.82% accuracy when given ground-truth information, versus a 33.16% improvement for human participants, exposing a severe bottleneck in how current LVLMs integrate retrieved visual knowledge. These findings underscore the need for vision-first RAG paradigms.
📝 Abstract
Existing multimodal retrieval benchmarks primarily focus on evaluating whether models can retrieve and utilize external textual knowledge for question answering. However, there are scenarios where retrieving visual information is either more beneficial or easier to access than textual data. In this paper, we introduce a multimodal retrieval-augmented generation benchmark, MRAG-Bench, in which we systematically identify and categorize scenarios where visually augmented knowledge is better than textual knowledge, for instance, additional images of an entity from varying viewpoints. MRAG-Bench consists of 16,130 images and 1,353 human-annotated multiple-choice questions across 9 distinct scenarios. With MRAG-Bench, we evaluate 10 open-source and 4 proprietary large vision-language models (LVLMs). Our results show that all LVLMs exhibit greater improvements when augmented with images than with textual knowledge, confirming that MRAG-Bench is vision-centric. Additionally, we conduct extensive analysis with MRAG-Bench, which offers valuable insights into retrieval-augmented LVLMs. Notably, the top-performing model, GPT-4o, struggles to leverage retrieved knowledge effectively, achieving only a 5.82% improvement with ground-truth information, in contrast to a 33.16% improvement observed in human participants. These findings highlight the importance of MRAG-Bench in encouraging the community to enhance LVLMs' ability to utilize retrieved visual knowledge more effectively.
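The core comparison the abstract describes (accuracy gain from image augmentation versus text augmentation over an unaugmented baseline) can be sketched as follows. This is a minimal illustrative sketch, not the paper's evaluation code: the function names and the toy multiple-choice data are hypothetical.

```python
# Hypothetical sketch of an MRAG-Bench-style comparison: score a model's
# multiple-choice answers under no augmentation, text augmentation, and
# image augmentation, then report each mode's gain over the baseline.
# All names and the toy data below are illustrative assumptions.

def accuracy(predictions, answers):
    """Fraction of multiple-choice predictions matching the gold answers."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

def augmentation_gains(preds_by_mode, answers):
    """Accuracy gain of each augmentation mode over the 'none' baseline."""
    base = accuracy(preds_by_mode["none"], answers)
    return {mode: round(accuracy(preds, answers) - base, 4)
            for mode, preds in preds_by_mode.items() if mode != "none"}

# Toy example: 5 questions with gold options A-D.
answers = ["A", "C", "B", "D", "A"]
preds = {
    "none":  ["A", "B", "B", "A", "C"],   # 2/5 correct
    "text":  ["A", "C", "B", "A", "C"],   # 3/5 correct
    "image": ["A", "C", "B", "D", "C"],   # 4/5 correct
}
print(augmentation_gains(preds, answers))  # → {'text': 0.2, 'image': 0.4}
```

In this toy run the image-augmented gain exceeds the text-augmented gain, mirroring the vision-centric pattern the benchmark is built to surface.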