MMEmb-R1: Reasoning-Enhanced Multimodal Embedding with Pair-Aware Selection and Adaptive Control

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Directly incorporating reasoning into multimodal embeddings often leads to structural misalignment and computational redundancy, making it challenging to balance performance and efficiency. This work proposes an adaptive reasoning-augmented multimodal embedding framework that formulates reasoning as a latent variable. The framework dynamically decides whether to activate reasoning through pairwise perception-based selection and reinforcement learning, while employing counterfactual interventions to identify effective reasoning pathways for on-demand invocation. By integrating contrastive learning with multimodal large language models, the method achieves a state-of-the-art score of 71.2 on the MMEB-V2 benchmark with only 4 billion parameters, significantly reducing inference overhead and latency.
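The summary mentions integrating contrastive learning with multimodal large language models. The paper does not include code, but the standard in-batch contrastive objective used for embedding models (InfoNCE, where each query's positive is the same-index target and other batch targets serve as negatives) can be sketched as follows; the function name and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce(queries, targets, temperature=0.05):
    """In-batch InfoNCE loss: queries[i] should match targets[i];
    all other targets in the batch act as negatives."""
    # L2-normalize so dot products are cosine similarities
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
    logits = q @ t.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # loss is the mean negative log-probability of the diagonal (positive) pairs
    return float(-np.mean(np.diag(log_probs)))
```

With aligned query/target pairs the diagonal dominates and the loss approaches zero; misaligned pairs drive it up, which is what pushes matched embeddings together during training.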
📝 Abstract
Multimodal large language models (MLLMs) have been successfully applied to multimodal embedding tasks, yet their generative reasoning capabilities remain underutilized. Directly incorporating chain-of-thought reasoning into embedding learning introduces two fundamental challenges. First, structural misalignment between instance-level reasoning and pairwise contrastive supervision may lead to shortcut behavior, where the model merely learns the superficial format of reasoning. Second, reasoning is not universally beneficial for embedding tasks. Enforcing reasoning for all inputs may introduce unnecessary computation and latency, and can even obscure salient semantic signals for simple cases. To address these issues, we propose MMEmb-R1, an adaptive reasoning-based multimodal embedding framework. We formulate reasoning as a latent variable and introduce pair-aware reasoning selection that employs counterfactual intervention to identify reasoning paths beneficial for query-target alignment. Furthermore, we adopt reinforcement learning to selectively invoke reasoning only when necessary. Experiments on the MMEB-V2 benchmark demonstrate that our model achieves a score of 71.2 with only 4B parameters, establishing a new state-of-the-art while significantly reducing reasoning overhead and inference latency.
Problem

Research questions and friction points this paper is trying to address.

multimodal embedding · reasoning · chain-of-thought · contrastive learning · computational overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

reasoning-enhanced embedding · pair-aware selection · adaptive reasoning control · counterfactual intervention · reinforcement learning
Yuchi Wang
CUHK MMLab; Peking University
Multimodality · VLM · Generative Models
Haiyang Yu
ByteDance
Weikang Bian
MMLab, The Chinese University of Hong Kong
Jiefeng Long
ByteDance
Xiao Liang
ByteDance
MLLM · Recommendation Systems · Multimodal Representation
Chao Feng
University of Zurich
Network · Machine Learning · Cybersecurity
Hongsheng Li
MMLab, The Chinese University of Hong Kong