๐ค AI Summary
Existing methods treat multimodal embedding as a direct encoding process, overlooking the generative reasoning capabilities inherent in multimodal large language models (MLLMs). To address this, we propose the **Reasoning-Guided Embedding (RGE)** frameworkโthe first to explicitly integrate structured reasoning from MLLMs into embedding learning. Specifically, RGE generates semantically rich reasoning texts conditioned on instructions and extracts multimodal representations from their contextual embeddings. Furthermore, we design a reasoning-guided contrastive learning objective to enhance semantic consistency and condition-awareness of the learned embeddings. Crucially, RGE introduces no additional parameters and requires no fine-tuning of the MLLM backbone. Evaluated on the MMEB cross-modal retrieval benchmark, RGE achieves a 4.9% improvement over non-reasoning baselines, demonstrating that explicit reasoning substantially enhances representation quality.
๐ Abstract
Multimodal embeddings are widely used in downstream tasks such as multimodal retrieval, enabling alignment of interleaved modalities in a shared representation space. While recent studies show that Multimodal Large Language Models (MLLMs) can serve as strong embedding extractors, existing approaches treat embedding extraction as a direct encoding step, overlooking the fact that MLLMs possess the generative capability for reasoning that could be leveraged to enhance representation quality. In this work, we explore how to explicitly incorporate reasoning into the embedding process. To this end, we propose Reasoning Guided Embeddings (RGE), which preserves the generative rationale process of MLLMs and couples it with contrastive training. Our method first enables the model to perform structured rationale generation conditioned on the instruction, and then extracts representations after reasoning has unfolded. This simple design enhances the context-conditional inference signals within the embedding, leading to improved multimodal representation quality. Experiments on the MMEB benchmark show that reasoning-guided conditioning improves multimodal retrieval performance by 4.9% over the non-reasoning baseline, confirming that explicit reasoning can effectively enhance embedding quality.