🤖 AI Summary
This work addresses the limitations of conventional multimodal embedding methods, which rely on single-round contrastive learning, suffer from low computational efficiency, and neglect contextual relationships among multiple queries. To overcome these issues, the authors propose MuCo, a multi-round contrastive learning framework that introduces a conversational mechanism into multimodal contrastive learning for the first time. Leveraging multimodal large language models (MLLMs), MuCo processes multiple query-target pairs associated with the same image in a single forward pass, enabling context-aware batched embedding generation. This approach substantially improves training efficiency, cross-modal alignment, and representation consistency. Empirical results demonstrate state-of-the-art retrieval performance on the MMEB and M-BEIR benchmarks, accompanied by the release of M3T, a large-scale multimodal multi-turn dataset.
📝 Abstract
Universal multimodal embedding models built on Multimodal Large Language Models (MLLMs) have traditionally employed contrastive learning, which aligns representations of query-target pairs across different modalities. Despite its empirical success, this approach is primarily built on a "single-turn" formulation in which each query-target pair is treated as an independent data point. This paradigm is computationally inefficient at scale, since it requires a separate forward pass for each pair, and it overlooks potential contextual relationships between multiple queries that relate to the same context. In this work, we introduce Multi-Turn Contrastive Learning (MuCo), a dialogue-inspired framework that revisits this process. MuCo leverages the conversational nature of MLLMs to process multiple related query-target pairs associated with a single image within a single forward pass. This allows us to extract a set of query and target embeddings simultaneously, each conditioned on a shared context representation, amplifying the effective batch size and improving overall training efficiency. Trained with a newly curated 5M-example multimodal multi-turn dataset (M3T), MuCo achieves state-of-the-art retrieval performance on the MMEB and M-BEIR benchmarks while markedly improving both training efficiency and representation coherence across modalities. Code and M3T are available at https://github.com/naver-ai/muco
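To make the batch-amplification idea concrete, here is a minimal sketch (not the authors' implementation) of a multi-turn InfoNCE-style objective in NumPy. It assumes the MLLM has already produced, in one forward pass per image, `T` query embeddings and `T` target embeddings for each of `B` images; the `B*T` pairs are then flattened into a single contrastive batch, so each query sees `B*T - 1` negatives while only `B` forward passes were required. The function name, shapes, and temperature are illustrative assumptions.

```python
import numpy as np

def multi_turn_info_nce(q, t, tau=0.07):
    """Hypothetical multi-turn contrastive loss (query -> target direction).

    q, t: (B, T, D) arrays of query / target embeddings, where B is the
    number of images (contexts) and T the number of turns per image.
    Flattening to B*T pairs amplifies the effective contrastive batch
    size relative to the B forward passes actually performed.
    """
    B, T, D = q.shape
    q = q.reshape(B * T, D)
    t = t.reshape(B * T, D)
    # L2-normalize so dot products are cosine similarities.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    logits = q @ t.T / tau                       # (B*T, B*T) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Row-wise softmax cross-entropy; the positive for query i is target i.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(B * T)
    return -log_probs[idx, idx].mean()
```

With `B=2` images and `T=3` turns, each query is contrasted against 6 targets even though only 2 context encodings were computed, which is the efficiency gain the abstract describes.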