MR$^2$-Bench: Going Beyond Matching to Reasoning in Multimodal Retrieval

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing multimodal retrieval benchmarks primarily assess superficial semantic matching (e.g., object–text alignment), failing to evaluate higher-order reasoning capabilities—such as logical, spatial, and causal inference. To address this gap, we propose MR²-Bench, the first benchmark explicitly designed to evaluate higher-order reasoning in multimodal retrieval. It comprises a diverse dataset spanning natural images, charts, and visual puzzles, synthesized from both human-crafted examples and curated public sources, enabling rigorous assessment under complex query scenarios. MR²-Bench establishes the first fully reasoning-driven evaluation framework for multimodal retrieval, significantly enhancing both ecological validity and task difficulty. Empirical results reveal severe limitations in state-of-the-art models: for instance, Seed1.6-Embedding’s Recall@1 drops dramatically from 77.78% to 9.91%, exposing a fundamental bottleneck in deep semantic understanding.

Technology Category

Application Category

📝 Abstract
Multimodal retrieval is becoming a crucial component of modern AI applications, yet its evaluation lags behind the demands of more realistic and challenging scenarios. Existing benchmarks primarily probe surface-level semantic correspondence (e.g., object-text matching) while failing to assess the deeper reasoning required to capture complex relationships between visual and textual information. To address this gap, we introduce MR$^2$-Bench, a reasoning-intensive benchmark for multimodal retrieval. MR$^2$-Bench presents the following critical values: 1) all tasks are reasoning-driven, going beyond shallow matching to effectively assess models' capacity for logical, spatial, and causal inference; 2) it features diverse multimodal data, such as natural images, diagrams, and visual puzzles, enabling comprehensive evaluation across content types; 3) it supports complex queries and documents containing multiple images and covers diverse retrieval scenarios, more accurately reflecting real-world applications. Our benchmark contains 1,309 curated queries, derived either from manual collection and annotation or from selective consolidation of public datasets. Despite achieving strong results on existing benchmarks, current state-of-the-art models still struggle on MR$^2$-Bench: for example, the leading Seed1.6-Embedding model attains a Recall@1 of 77.78 on MMEB, but only 9.91 on MR$^2$-Bench. This substantial performance gap highlights both the increased challenge posed by our benchmark and the pressing need for further advances in reasoning-intensive multimodal retrieval. The dataset and evaluation code will be made publicly available at https://github.com/VectorSpaceLab/MR2-Bench.
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations in evaluating multimodal retrieval beyond surface-level matching
Assesses deeper reasoning capabilities for complex visual-textual relationships
Provides comprehensive evaluation across diverse content types and retrieval scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces reasoning-intensive benchmark for multimodal retrieval
Features diverse data types including images and diagrams
Supports complex queries with multiple images and scenarios
🔎 Similar Papers
No similar papers found.
Junjie Zhou
Junjie Zhou
Nanjing University
Computer VisionMachine Learning
Z
Ze Liu
University of Science and Technology of China
Lei Xiong
Lei Xiong
Stanford University
AI + BiologyComputational BiologyDeep LearningSingle Cell
J
Jin-Ge Yao
Beijing Academy of Artificial Intelligence
Yueze Wang
Yueze Wang
Beijing Academy of Artificial Intelligence (BAAI)
MultimodalData-centric AI
Shitao Xiao
Shitao Xiao
BUPT
F
Fenfen Lin
Beijing Academy of Artificial Intelligence
M
Miguel Hu Chen
Beijing Academy of Artificial Intelligence
Zhicheng Dou
Zhicheng Dou
Renmin University of China
Information RetrievalRetrieval Augmented GenerationLarge Language ModelsGenerative IR
Siqi Bao
Siqi Bao
Baidu
Natural Language ProcessingMedical Image Analysis
D
Defu Lian
University of Science and Technology of China
Y
Yongping Xiong
Beijing University of Posts and Telecommunications
Z
Zheng Liu
Beijing Academy of Artificial Intelligence