🤖 AI Summary
To address the high computational overhead and unstable reinforcement learning (RL) training that arise when directly applying DeepSeek-R1-style methods to multimodal retrieval, this paper proposes Retrv-R1, a reasoning-driven multimodal universal retrieval framework. Methodologically, Retrv-R1 introduces (1) a lightweight information compression module coupled with a details inspection mechanism to cut the token cost of reasoning over candidates while preserving critical information for hard ones, and (2) a new training paradigm: an activation stage on a retrieval-tailored synthetic chain-of-thought (CoT) dataset, followed by RL with a curriculum reward, to improve training stability and generalization. Through this multi-stage optimization, Retrv-R1 substantially reduces token consumption in cross-modal retrieval while remaining robust on challenging samples. Empirically, it achieves state-of-the-art (SOTA) performance across multiple benchmarks, demonstrating both efficient inference and strong cross-domain adaptability.
📝 Abstract
The success of DeepSeek-R1 demonstrates the immense potential of using reinforcement learning (RL) to enhance LLMs' reasoning capabilities. This paper introduces Retrv-R1, the first R1-style MLLM specifically designed for multimodal universal retrieval, achieving higher performance by employing step-by-step reasoning to produce more accurate retrieval results. We find that directly applying the methods of DeepSeek-R1 to retrieval tasks is not feasible, mainly due to (1) the high computational cost caused by the large token consumption required for multiple candidates with reasoning processes, and (2) the instability and suboptimal results when directly applying RL to train for retrieval tasks. To address these issues, Retrv-R1 introduces an information compression module with a details inspection mechanism, which enhances computational efficiency by reducing the number of tokens while ensuring that critical information for challenging candidates is preserved. Furthermore, a new training paradigm is proposed, including an activation stage using a retrieval-tailored synthetic CoT dataset for more effective optimization, followed by RL with a novel curriculum reward to improve both performance and efficiency. Incorporating these novel designs, Retrv-R1 achieves SOTA performance, high efficiency, and strong generalization ability, as demonstrated by experiments across multiple benchmarks and tasks.
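The abstract does not specify how the curriculum reward is scheduled. Purely as an illustration of the general idea (rewarding correctness first, then phasing in an efficiency term as training progresses), here is a minimal hypothetical sketch; the function name, the linear schedule, and the token budget `max_tokens` are all assumptions, not the paper's actual formulation:

```python
def curriculum_reward(correct: bool, n_reasoning_tokens: int,
                      step: int, total_steps: int,
                      max_tokens: int = 1024) -> float:
    """Hypothetical curriculum reward for retrieval RL.

    Early in training only answer correctness is rewarded; a token-
    efficiency bonus is linearly phased in as training progresses,
    encouraging shorter reasoning once accuracy is established.
    """
    accuracy_reward = 1.0 if correct else 0.0
    # Curriculum weight: grows linearly from 0 (start) to 1 (end).
    progress = min(step / total_steps, 1.0)
    # Efficiency term: 1 for zero reasoning tokens, 0 at the budget cap.
    efficiency_reward = 1.0 - min(n_reasoning_tokens / max_tokens, 1.0)
    return accuracy_reward + progress * efficiency_reward
```

Under this sketch, a correct answer with concise reasoning earns up to 2.0 late in training, while verbose reasoning is increasingly penalized relative to concise reasoning as the schedule advances.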