RetLLM: Training- and Data-Free MLLMs for Multimodal Information Retrieval

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the pre-training inconsistency that arises in multimodal information retrieval methods relying on large-scale training data and contrastive fine-tuning. To sidestep these limitations, the authors propose RetLLM, a two-stage prompting framework that requires neither training nor external data: a coarse stage first retrieves a small set of top-k candidates for each query, and a fine stage then employs a vision-enhanced multimodal large language model (MLLM) to produce fine-grained relevance scores. The authors present this as the first approach to apply MLLMs directly to end-to-end retrieval without task-specific adaptation. Extensive experiments show that the method outperforms fine-tuned baselines across multiple benchmarks, highlighting the inherent cross-modal reasoning of modern MLLMs and markedly better use of critical visual information.

📝 Abstract
Multimodal information retrieval (MMIR) has gained attention for its flexibility in handling text, images, or mixed queries and candidates. Recent breakthroughs in multimodal large language models (MLLMs) boost MMIR performance by incorporating MLLM knowledge under the contrastive finetuning framework. However, they suffer from pre-training inconsistency and require large datasets. In this work, we introduce a novel framework, RetLLM, designed to query MLLMs for MMIR in a training- and data-free manner. Specifically, we formulate MMIR as a similarity score generation task and prompt MLLMs to directly predict retrieval scores in a coarse-then-fine pipeline. At the coarse stage, a top-k filtering strategy builds a small yet high-quality candidate pool for each query, enabling MLLMs to focus on semantically relevant candidates. Subsequently, the retrieval score is predicted by feeding both the query and candidate into MLLMs at the fine stage. Importantly, we propose a visual enhancement module during reasoning to help MLLMs re-pick forgotten visuals, improving retrieval. Extensive experiments on MMIR benchmarks show that RetLLM outperforms fine-tuned models. Ablation studies further verify each component. Our work demonstrates that MLLMs can achieve strong MMIR performance without any training, highlighting their inherent multimodal reasoning ability in a simple, scalable framework. We release our code at: https://github.com/alivecat05/RETLLM
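The coarse-then-fine pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the embedding similarity, the candidate pool, and the `score_fn` stand-in for the MLLM relevance prompt are all hypothetical placeholders chosen for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def coarse_top_k(query_emb, candidate_embs, k):
    """Coarse stage: rank all candidates by embedding similarity
    and keep only the top-k indices as a small, high-quality pool."""
    ranked = sorted(range(len(candidate_embs)),
                    key=lambda i: cosine(query_emb, candidate_embs[i]),
                    reverse=True)
    return ranked[:k]

def fine_rerank(query, candidates, pool, score_fn):
    """Fine stage: query the (mock) MLLM scorer with each surviving
    query-candidate pair and sort by the predicted retrieval score."""
    scored = [(i, score_fn(query, candidates[i])) for i in pool]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

A toy run: `pool = coarse_top_k(q_emb, cand_embs, k=10)` prunes the candidate set, then `fine_rerank(query, candidates, pool, mllm_score)` yields the final ranking, where `mllm_score` would wrap a prompt asking the MLLM to emit a relevance score for the pair.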
Problem

Research questions and friction points this paper is trying to address.

Multimodal Information Retrieval
Multimodal Large Language Models
Training-Free
Data-Free
Pre-training Inconsistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-Free
Data-Free
Multimodal Information Retrieval
Multimodal Large Language Models
Visual Enhancement
Dawei Su
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
Dongsheng Wang
Assistant professor at Shenzhen University
Machine Learning, Bayesian Statistics, Deep Learning