RzenEmbed: Towards Comprehensive Multimodal Retrieval

📅 2025-10-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing CLIP-based methods are primarily designed for natural images and generalize poorly to other critical modalities such as videos and visual documents. This paper introduces RzenEmbed, a unified multimodal retrieval framework that jointly embeds text, images, videos, and visual documents. Methodologically: (1) a two-stage training strategy strengthens cross-modal semantic alignment; (2) a hardness-weighted mechanism coupled with an improved InfoNCE loss suppresses false negatives and data noise, improving discrimination on hard samples; (3) a learnable temperature parameter and model souping further optimize representation learning. Evaluated on the MMEB benchmark, RzenEmbed achieves state-of-the-art overall performance, with particularly large gains in video and visual document retrieval, outperforming all existing methods by substantial margins.

📝 Abstract
The rapid advancement of Multimodal Large Language Models (MLLMs) has extended CLIP-based frameworks to produce powerful, universal embeddings for retrieval tasks. However, existing methods primarily focus on natural images, offering limited support for other crucial visual modalities such as videos and visual documents. To bridge this gap, we introduce RzenEmbed, a unified framework to learn embeddings across a diverse set of modalities, including text, images, videos, and visual documents. We employ a novel two-stage training strategy to learn discriminative representations. The first stage focuses on foundational text and multimodal retrieval. In the second stage, we introduce an improved InfoNCE loss, incorporating two key enhancements. Firstly, a hardness-weighted mechanism guides the model to prioritize challenging samples by assigning them higher weights within each batch. Secondly, we implement an approach to mitigate the impact of false negatives and alleviate data noise. This strategy not only enhances the model's discriminative power but also improves its instruction-following capabilities. We further boost performance with a learnable temperature parameter and model souping. RzenEmbed sets a new state-of-the-art on the MMEB benchmark. It not only achieves the best overall score but also outperforms all prior work on the challenging video and visual document retrieval tasks. Our models are available at https://huggingface.co/qihoo360/RzenEmbed.
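The abstract's improved InfoNCE loss combines in-batch contrastive learning with two additions: up-weighting hard samples and masking likely false negatives. The sketch below is a minimal NumPy illustration of that general recipe, not the paper's implementation; the weighting form `exp(beta * margin)`, the threshold `fn_threshold`, and the constant `beta` are illustrative assumptions, since the abstract does not give exact formulas.

```python
import numpy as np

def hardness_weighted_infonce(q, p, temperature=0.05, fn_threshold=0.95, beta=5.0):
    """Illustrative hardness-weighted InfoNCE over one batch.

    q, p: L2-normalized query / positive embeddings, shape (B, D).
    Negatives for query i are the positives of the other queries.
    NOTE: weighting form, fn_threshold, and beta are assumptions for
    illustration, not values from the paper.
    """
    raw = q @ p.T                      # (B, B) cosine similarities
    sim = raw / temperature            # temperature-scaled logits
    B = sim.shape[0]
    pos = np.diag(sim)                 # logits of the true pairs

    # False-negative mitigation: mask off-diagonal pairs that are nearly
    # identical to the query (likely mislabeled duplicates / data noise).
    mask = raw > fn_threshold
    np.fill_diagonal(mask, False)
    sim = np.where(mask, -np.inf, sim)

    # Standard per-query InfoNCE: logsumexp over the row minus the positive.
    m = sim.max(axis=1)
    lse = np.log(np.exp(sim - m[:, None]).sum(axis=1)) + m
    per_query = lse - pos              # always >= 0

    # Hardness weighting: a query is "hard" when its strongest negative is
    # close to (or above) its positive; such queries get larger weights.
    neg = np.where(np.eye(B, dtype=bool), -np.inf, sim)
    hardest = neg.max(axis=1)
    w = np.exp(beta * np.clip(hardest - pos, -10.0, 0.0))
    w = w / w.sum() * B                # normalize weights within the batch
    return float(np.mean(w * per_query))
```

With well-aligned positives the loss is near zero; shuffling the positives raises it, and the duplicate-similarity mask removes the accidental exact matches that shuffling creates. A learnable temperature would simply make `temperature` a trainable parameter updated by the same optimizer.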
Problem

Research questions and friction points this paper is trying to address.

Extends CLIP-based frameworks to support diverse visual modalities
Addresses limited multimodal retrieval beyond natural images
Improves discriminative power through enhanced training strategy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified embedding framework for multiple visual modalities
Two-stage training with improved InfoNCE loss
Hardness-weighted mechanism and false negative mitigation
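The abstract also mentions model souping as a final performance boost. In its simplest (uniform) form, souping averages the weights of several fine-tuned checkpoints of the same architecture; a minimal sketch, assuming the paper uses a variant of this standard recipe (its exact selection procedure is not stated in the abstract):

```python
import numpy as np

def uniform_soup(state_dicts):
    """Uniform model soup: element-wise average of checkpoint weights.

    state_dicts: list of {param_name: np.ndarray}, all with identical
    keys and shapes (checkpoints of one architecture). Returns a new
    state dict usable as the souped model's weights.
    """
    keys = state_dicts[0].keys()
    return {k: np.mean([sd[k] for sd in state_dicts], axis=0) for k in keys}
```

A common refinement ("greedy souping") adds checkpoints one by one and keeps an addition only if held-out accuracy improves.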
👥 Authors
Weijian Jian, 360 AI Research
Yajun Zhang, 360 AI Research
Dawei Liang, 360 AI Research
Chunyu Xie, Beihang University; 360 AI Research (interests: Multimodal learning, Computer vision, Machine learning)
Yixiao He, 360 AI Research
Dawei Leng (interests: Multimodal Understanding, Multimodal Generation, Vision and Language)
Yuhui Yin, 360 AI Research