VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal embedding models (e.g., VLM2Vec, E5-V, GME) primarily target natural images and offer limited support for videos and visual documents, which hinders their applicability in AI agents, multimodal search, and retrieval-augmented generation (RAG). To address this gap, we propose VLM2Vec-V2, a unified cross-modal embedding framework that, for the first time, systematically supports four modalities: text, image, video, and visual documents. It leverages large-scale vision-language models, multi-granularity semantic alignment via contrastive learning, and a shared encoder architecture to enable joint representation learning. Complementing this, we introduce MMEB-V2, a new benchmark covering video- and document-centric tasks that fills a critical evaluation void. Experiments demonstrate that VLM2Vec-V2 achieves state-of-the-art performance on the new modality tasks and matches or exceeds prior art on standard image benchmarks, exhibiting strong generalization and practical utility.
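The summary above mentions contrastive learning over a shared encoder as the alignment mechanism. As a rough illustration only, the sketch below shows a standard in-batch InfoNCE-style contrastive loss in PyTorch; the function name, temperature, and embedding shapes are assumptions made for this example and are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(query_emb, target_emb, temperature=0.05):
    """InfoNCE-style loss over a batch of (query, target) embedding pairs.

    query_emb, target_emb: (B, D) tensors from a shared encoder; each row
    may originate from any modality (text, image, video, or document page).
    All other in-batch targets act as negatives for a given query.
    Temperature and shapes are illustrative, not the paper's settings.
    """
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = q @ t.T / temperature                     # (B, B) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)  # i-th query matches i-th target
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for encoder outputs
loss = contrastive_alignment_loss(torch.randn(8, 768), torch.randn(8, 768))
```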

📝 Abstract
Multimodal embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering over different modalities. However, existing multimodal embedding models such as VLM2Vec, E5-V, and GME are predominantly focused on natural images, with limited support for other visual forms such as videos and visual documents. This restricts their applicability in real-world scenarios, including AI agents, multimodal search and recommendation, and retrieval-augmented generation (RAG). To close this gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types: visual document retrieval, video retrieval, temporal grounding, video classification, and video question answering, spanning text, image, video, and visual document inputs. Next, we train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs. Extensive experiments show that VLM2Vec-V2 not only achieves strong performance on the newly introduced video and document retrieval tasks, but also improves over prior baselines on the original image benchmarks. Through extensive evaluation, our study offers insights into the generalizability of various multimodal embedding models and highlights effective strategies for unified embedding learning, laying the groundwork for more scalable and adaptable representation learning in both research and real-world settings.
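For context, retrieval with a unified embedding model of this kind typically reduces to nearest-neighbor search in the shared embedding space. The minimal NumPy sketch below (the function name and vector shapes are illustrative assumptions, not the paper's code) ranks candidate embeddings against a query embedding by cosine similarity; because all modalities are mapped into one space, the candidates may be images, videos, or visual-document pages.

```python
import numpy as np

def rank_candidates(query_vec, candidate_vecs):
    """Rank candidates by cosine similarity to a query embedding.

    query_vec: (D,) embedding of the query (e.g., a text instruction).
    candidate_vecs: (N, D) embeddings of candidates; these can come from
    any modality encoded into the same shared space.
    Returns candidate indices sorted from most to least similar.
    """
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))

# Toy usage with random vectors standing in for model outputs
top = rank_candidates(np.random.randn(768), np.random.randn(100, 768))
print(top[:5])  # indices of the five closest candidates
```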
Problem

Research questions and friction points this paper is trying to address.

Existing multimodal embedding models offer limited support for videos and visual documents
Need for a unified embedding framework across diverse visual forms
Improving performance on video and visual-document retrieval tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified embedding framework spanning text, images, videos, and visual documents
MMEB-V2, a comprehensive benchmark adding five new video- and document-centric task types
General-purpose embedding model supporting text, image, video, and visual document inputs