AI Summary
Current multimodal retrieval methods treat multimodal large language models (MLLMs) solely as static encoders, neglecting their generative capacity, compositional reasoning, and world knowledge. To address this limitation, we propose a generative matching framework comprising three key components: (1) an end-to-end trainable autoregressive relevance discrimination module that leverages multi-view inputs to provide instance-level discriminative supervision and enhance hard negative learning; (2) learnable token expansion to enrich input representations, yielding contextually grounded and orthogonal multimodal embeddings; and (3) joint optimization of contrastive loss and generative matching loss. Evaluated on the MMEB benchmark, our method achieves state-of-the-art performance and demonstrates strong zero-shot generalization across five diverse datasets. These results empirically validate the effectiveness and transferability of generative modeling for fine-grained multimodal semantic alignment.
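To make the joint objective in point (3) concrete, here is a minimal numpy sketch of one plausible formulation: an InfoNCE-style contrastive loss over query/document embeddings plus a binary "yes/no" generative matching loss on the MLLM's relevance logit. All function names, the temperature `tau`, and the weighting hyperparameter `lambda_gen` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def info_nce(q, d, tau=0.07):
    # q, d: (B, D) L2-normalized query/document embeddings;
    # positives sit on the diagonal of the similarity matrix.
    logits = q @ d.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def generative_match_loss(yes_logits, labels):
    # yes_logits: the MLLM's logit for a "yes" relevance token per pair
    # labels: 1 for matched pairs, 0 for hard negatives
    p = 1.0 / (1.0 + np.exp(-yes_logits))
    return -np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))

def joint_loss(q, d, yes_logits, labels, lambda_gen=1.0):
    # lambda_gen is a hypothetical weighting between the two terms
    return info_nce(q, d) + lambda_gen * generative_match_loss(yes_logits, labels)
```

The generative term supplies instance-level supervision on each (query, document) pair, which is where the stronger hard-negative gradients described above would come from.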
Abstract
We present ReMatch, a framework that harnesses the generative strength of MLLMs for multimodal retrieval. Previous approaches treated the MLLM as a simple encoder, ignoring its generative nature and under-utilizing its compositional reasoning and world knowledge. We instead train the embedding MLLM end-to-end with a chat-style generative matching stage. This stage uses the same MLLM to autoregressively judge relevance from multi-view inputs, including both the raw data and its own projected embeddings for each query and document. It provides instance-wise discrimination supervision that complements a standard contrastive loss, offering stronger gradients on hard negatives and preserving the compositional strengths of the original MLLM. To obtain semantically richer multimodal embeddings, we augment each input with multiple learnable tokens, producing fine-grained, contextual, mutually orthogonal embeddings at low inference cost. Building on our established high-performance baseline, we assemble these ideas into a powerful training recipe and achieve a new state-of-the-art on the Massive Multimodal Embedding Benchmark (MMEB). Our experiments show particularly strong zero-shot generalization on five datasets, highlighting the robustness and transferability of ReMatch.
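The learnable-token expansion can be sketched as follows: M trainable tokens are appended to each input, and their final hidden states serve as M contextual embeddings, with an off-diagonal Gram-matrix penalty encouraging mutual orthogonality. This is a toy stand-in for the MLLM forward pass (a single mean-pooled context step replaces attention); the sizes `M`, `D` and all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# M learnable expansion tokens shared across inputs (hypothetical sizes)
M, D = 4, 8
expansion_tokens = rng.normal(size=(M, D))  # trained jointly with the MLLM

def expand_and_embed(input_states):
    # input_states: (T, D) hidden states of the raw query/document tokens.
    # Stand-in for the MLLM forward pass: each learnable token absorbs
    # input context (here via a single mean-pooled step for illustration).
    context = input_states.mean(axis=0)               # (D,)
    views = expansion_tokens + context                # (M, D) contextual views
    views /= np.linalg.norm(views, axis=1, keepdims=True)
    return views

def orthogonality_penalty(views):
    # Penalize off-diagonal entries of the Gram matrix so the M
    # embeddings capture complementary, mutually orthogonal views.
    gram = views @ views.T
    off_diag = gram - np.diag(np.diag(gram))
    return np.sum(off_diag ** 2)
```

Because the M tokens ride along in a single forward pass, the extra embeddings come at low inference cost, matching the claim in the abstract.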