🤖 AI Summary
Existing multimodal retrieval benchmarks primarily assess superficial semantic matching (e.g., object–text alignment), failing to evaluate higher-order reasoning capabilities—such as logical, spatial, and causal inference. To address this gap, we propose MR²-Bench, the first benchmark explicitly designed to evaluate higher-order reasoning in multimodal retrieval. It comprises a diverse dataset spanning natural images, charts, and visual puzzles, synthesized from both human-crafted examples and curated public sources, enabling rigorous assessment under complex query scenarios. MR²-Bench establishes the first fully reasoning-driven evaluation framework for multimodal retrieval, significantly enhancing both ecological validity and task difficulty. Empirical results reveal severe limitations in state-of-the-art models: for instance, Seed1.6-Embedding’s Recall@1 drops dramatically from 77.78% to 9.91%, exposing a fundamental bottleneck in deep semantic understanding.
📝 Abstract
Multimodal retrieval is becoming a crucial component of modern AI applications, yet its evaluation lags behind the demands of more realistic and challenging scenarios. Existing benchmarks primarily probe surface-level semantic correspondence (e.g., object-text matching) while failing to assess the deeper reasoning required to capture complex relationships between visual and textual information. To address this gap, we introduce MR$^2$-Bench, a reasoning-intensive benchmark for multimodal retrieval. MR$^2$-Bench presents the following critical values: 1) all tasks are reasoning-driven, going beyond shallow matching to effectively assess models' capacity for logical, spatial, and causal inference; 2) it features diverse multimodal data, such as natural images, diagrams, and visual puzzles, enabling comprehensive evaluation across content types; 3) it supports complex queries and documents containing multiple images and covers diverse retrieval scenarios, more accurately reflecting real-world applications. Our benchmark contains 1,309 curated queries, derived either from manual collection and annotation or from selective consolidation of public datasets. Despite achieving strong results on existing benchmarks, current state-of-the-art models still struggle on MR$^2$-Bench: for example, the leading Seed1.6-Embedding model attains a Recall@1 of 77.78 on MMEB, but only 9.91 on MR$^2$-Bench. This substantial performance gap highlights both the increased challenge posed by our benchmark and the pressing need for further advances in reasoning-intensive multimodal retrieval. The dataset and evaluation code will be made publicly available at https://github.com/VectorSpaceLab/MR2-Bench.