🤖 AI Summary
This work addresses pretraining inconsistency in multimodal information retrieval, a problem stemming from the reliance of existing methods on large-scale training data and contrastive fine-tuning. To sidestep these limitations, the authors propose a two-stage prompting framework that requires neither training nor external data: it first retrieves a coarse set of top-k candidates and then employs a vision-enhanced multimodal large language model (MLLM) for fine-grained relevance scoring. Notably, the approach applies MLLMs directly to retrieval without any task-specific adaptation. Extensive experiments show that the method outperforms fine-tuned baselines across multiple benchmarks, highlighting the inherent cross-modal reasoning capabilities of modern MLLMs and improving the utilization of critical visual information.
📝 Abstract
Multimodal information retrieval (MMIR) has gained attention for its flexibility in handling text, images, or mixed queries and candidates. Recent breakthroughs in multimodal large language models (MLLMs) boost MMIR performance by incorporating MLLM knowledge under a contrastive fine-tuning framework. However, such methods suffer from pretraining inconsistency and require large datasets. In this work, we introduce a novel framework, RetLLM, designed to query MLLMs for MMIR in a training- and data-free manner. Specifically, we formulate MMIR as a similarity-score generation task and prompt MLLMs to directly predict retrieval scores in a coarse-then-fine pipeline. At the coarse stage, a top-k filtering strategy builds a small yet high-quality candidate pool for each query, enabling MLLMs to focus on semantically relevant candidates. At the fine stage, the retrieval score is predicted by feeding both the query and candidate into the MLLM. Importantly, we propose a visual enhancement module during reasoning that helps MLLMs re-attend to visual details that would otherwise be overlooked, improving retrieval accuracy. Extensive experiments on MMIR benchmarks show that RetLLM outperforms fine-tuned models, and ablation studies verify the contribution of each component. Our work demonstrates that MLLMs can achieve strong MMIR performance without any training, highlighting their inherent multimodal reasoning ability in a simple, scalable framework. We release our code at: https://github.com/alivecat05/RETLLM
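The coarse-then-fine pipeline from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embeddings are random stand-ins for a real encoder, and `overlap_score` is a hypothetical placeholder for the MLLM-predicted relevance score (the actual prompt and visual enhancement module are in the released code).

```python
import numpy as np

def coarse_filter(query_emb, cand_embs, k=5):
    """Stage 1: top-k filtering by cosine similarity to build a small,
    high-quality candidate pool for the query."""
    q = query_emb / np.linalg.norm(query_emb)
    c = cand_embs / np.linalg.norm(cand_embs, axis=1, keepdims=True)
    sims = c @ q
    topk = np.argsort(-sims)[:k]
    return topk, sims[topk]

def fine_rerank(query, candidates, score_fn):
    """Stage 2: obtain a fine-grained relevance score for each
    (query, candidate) pair and sort descending. In the paper the
    score comes from prompting an MLLM; here `score_fn` is a stub."""
    scored = [(cand, score_fn(query, cand)) for cand in candidates]
    return sorted(scored, key=lambda x: x[1], reverse=True)

def overlap_score(query, cand):
    # Hypothetical stand-in for the MLLM score: Jaccard word overlap.
    qs, cs = set(query.lower().split()), set(cand.lower().split())
    return len(qs & cs) / max(len(qs | cs), 1)

# Toy demo: candidate 7 is a near-duplicate of the query embedding,
# so it should survive the coarse stage.
rng = np.random.default_rng(0)
cand_embs = rng.normal(size=(100, 32))
query_emb = cand_embs[7] + 0.01 * rng.normal(size=32)
pool_idx, _ = coarse_filter(query_emb, cand_embs, k=5)

# Fine stage on a small text pool.
pool = ["a dog on the beach", "a red car", "dog playing on sandy beach"]
ranked = fine_rerank("dog on a sunny beach", pool, overlap_score)
```

The key design point is that the expensive per-pair MLLM call runs only over the small top-k pool, keeping the fine stage tractable while the cheap embedding pass handles the full candidate set.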