Reasoning Guided Embeddings: Leveraging MLLM Reasoning for Improved Multimodal Retrieval

๐Ÿ“… 2025-11-20
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing methods treat multimodal embedding as a direct encoding process, overlooking the generative reasoning capabilities inherent in multimodal large language models (MLLMs). To address this, we propose the **Reasoning-Guided Embedding (RGE)** frameworkโ€”the first to explicitly integrate structured reasoning from MLLMs into embedding learning. Specifically, RGE generates semantically rich reasoning texts conditioned on instructions and extracts multimodal representations from their contextual embeddings. Furthermore, we design a reasoning-guided contrastive learning objective to enhance semantic consistency and condition-awareness of the learned embeddings. Crucially, RGE introduces no additional parameters and requires no fine-tuning of the MLLM backbone. Evaluated on the MMEB cross-modal retrieval benchmark, RGE achieves a 4.9% improvement over non-reasoning baselines, demonstrating that explicit reasoning substantially enhances representation quality.

Technology Category

Application Category

๐Ÿ“ Abstract
Multimodal embeddings are widely used in downstream tasks such as multimodal retrieval, enabling alignment of interleaved modalities in a shared representation space. While recent studies show that Multimodal Large Language Models (MLLMs) can serve as strong embedding extractors, existing approaches treat embedding extraction as a direct encoding step, overlooking the fact that MLLMs possess the generative capability for reasoning that could be leveraged to enhance representation quality. In this work, we explore how to explicitly incorporate reasoning into the embedding process. To this end, we propose Reasoning Guided Embeddings (RGE), which preserves the generative rationale process of MLLMs and couples it with contrastive training. Our method first enables the model to perform structured rationale generation conditioned on the instruction, and then extracts representations after reasoning has unfolded. This simple design enhances the context-conditional inference signals within the embedding, leading to improved multimodal representation quality. Experiments on the MMEB benchmark show that reasoning-guided conditioning improves multimodal retrieval performance by 4.9% over the non-reasoning baseline, confirming that explicit reasoning can effectively enhance embedding quality.
Problem

Research questions and friction points this paper is trying to address.

Improving multimodal embeddings by incorporating MLLM reasoning capabilities
Enhancing representation quality through structured rationale generation
Boosting multimodal retrieval performance using reasoning-guided conditioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates MLLM reasoning into embedding process
Uses structured rationale generation before representation extraction
Couples generative rationale with contrastive training
๐Ÿ”Ž Similar Papers
No similar papers found.