ReMatch: Boosting Representation through Matching for Multimodal Retrieval

πŸ“… 2025-11-24
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Current multimodal retrieval methods treat multimodal large language models (MLLMs) solely as static encoders, neglecting their generative capacity, compositional reasoning, and world knowledge. To address this limitation, we propose a generative matching framework comprising three key components: (1) an end-to-end trainable autoregressive relevance discrimination module that leverages multi-view inputs to provide instance-level discriminative supervision and enhance hard negative learning; (2) learnable token expansion to enrich input representations, yielding contextually grounded and orthogonal multimodal embeddings; and (3) joint optimization of contrastive loss and generative matching loss. Evaluated on the MMEB benchmark, our method achieves state-of-the-art performance and demonstrates strong zero-shot generalization across five diverse datasets. These results empirically validate the effectiveness and transferability of generative modeling for fine-grained multimodal semantic alignment.


πŸ“ Abstract
We present ReMatch, a framework that leverages the generative strength of MLLMs for multimodal retrieval. Previous approaches treated an MLLM as a simple encoder, ignoring its generative nature and under-utilising its compositional reasoning and world knowledge. We instead train the embedding MLLM end-to-end with a chat-style generative matching stage. The matching stage uses the same MLLM to autoregressively decide relevance from multi-view inputs, including both raw data and its own projected embeddings for each query and document. It provides instance-wise discrimination supervision that complements a standard contrastive loss, offering stronger gradients on hard negatives and preserving the compositional strengths of the original MLLM. To obtain semantically richer multimodal embeddings, we use multiple learnable tokens to augment each input, generating fine-grained, contextual, mutually orthogonal embeddings at low inference cost. Leveraging our established high-performance baseline, we assemble the ideas above into a powerful training recipe and achieve a new state-of-the-art on the Massive Multimodal Embedding Benchmark (MMEB). Our experiments show particularly strong zero-shot generalization results on five datasets, highlighting the robustness and transferability of ReMatch.
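The abstract describes jointly optimizing a standard contrastive loss with a generative matching loss, where the MLLM autoregressively decides relevance (e.g. by emitting a "yes"/"no" token) for each query–document pair. The sketch below illustrates that joint objective in minimal NumPy; the InfoNCE formulation, binary cross-entropy on the yes-token probability, and the weighting factor 0.5 are illustrative assumptions, not the paper's reported recipe.

```python
import numpy as np

def info_nce(query_emb, doc_emb, temperature=0.05):
    # Contrastive loss over in-batch negatives: each query's positive
    # document is the one at the same batch index.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    logits = q @ d.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # -log p(correct doc | query)

def generative_matching_loss(yes_token_probs, labels):
    # Binary cross-entropy on the model's autoregressive "relevant?" decision:
    # yes_token_probs[i] is the probability of emitting the "yes" token for
    # pair i, labels[i] in {0, 1}. Hard negatives with confidently wrong
    # predictions receive large gradients, complementing the contrastive term.
    p = np.clip(yes_token_probs, 1e-7, 1 - 1e-7)
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

# Toy joint objective on random embeddings and matching decisions.
rng = np.random.default_rng(0)
B, D = 4, 8
q, d = rng.normal(size=(B, D)), rng.normal(size=(B, D))
total = info_nce(q, d) + 0.5 * generative_matching_loss(
    rng.uniform(size=2 * B), rng.integers(0, 2, size=2 * B))
print(total)
```

In an actual training loop both terms would be backpropagated through the MLLM; here they are evaluated on fixed arrays purely to show the shape of the combined objective.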
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal retrieval by leveraging MLLMs' generative capabilities
Improving embedding quality through end-to-end training with generative matching
Achieving state-of-the-art performance on multimodal embedding benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end training with generative matching stage
Multiple learnable tokens for richer embeddings
Autoregressive relevance decision from multi-view inputs
Qianying Liu
University of Glasgow, Glasgow, UK
Xiao Liang
Xiaohongshu Inc., Beijing, China
Zhiqiang Zhang
University of Glasgow, Glasgow, UK
Yibo Chen
Xiaohongshu Inc., Beijing, China
Xu Tang
Xiaohongshu Inc., Beijing, China
Zhongfei Qing
Xiaohongshu Inc., Beijing, China
Fengfan Zhou
Huazhong University of Science and Technology
Adversarial examples
Yao Hu
Zhejiang University
Machine learning
Paul Henderson
University of Glasgow
Computer vision, machine learning