AI Summary
This work addresses the underperformance of existing multimodal large language model (MLLM)-based video embedding approaches relative to specialized video foundation models on video-text retrieval tasks. Through a systematic analysis of intermediate-layer features in MLLMs, the authors show that rich video semantics are already embedded in these representations. They propose a lightweight, vision-supervision-free training paradigm that relies only on text-summary alignment for embedding learning, combined with a calibrated MLLM head that enables zero-shot retrieval. The method significantly outperforms current state-of-the-art approaches across multiple standard video retrieval benchmarks, without requiring any visual-domain fine-tuning. These results demonstrate the effectiveness of purely text-alignment-driven video embedding learning.
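The zero-shot retrieval idea above — reading embeddings out of an intermediate MLLM layer and ranking texts by similarity — can be sketched as follows. This is an illustrative toy, not the paper's exact recipe: the layer index, mean pooling, and cosine similarity are all assumptions, and random arrays stand in for real per-layer hidden states.

```python
import numpy as np

def mean_pool(hidden_states, layer):
    """Mean-pool token embeddings from one intermediate layer.

    hidden_states: array of shape (num_layers, num_tokens, dim),
    standing in for the per-layer outputs of an MLLM forward pass.
    """
    return hidden_states[layer].mean(axis=0)

def cosine_retrieval(video_emb, text_embs):
    """Rank candidate text embeddings by cosine similarity to a video embedding."""
    v = video_emb / np.linalg.norm(video_emb)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    scores = t @ v
    return np.argsort(-scores)  # candidate indices, best match first

# Toy illustration: random "hidden states" in place of real MLLM outputs.
rng = np.random.default_rng(0)
hs = rng.normal(size=(32, 16, 64))      # 32 layers, 16 tokens, 64-dim
video_emb = mean_pool(hs, layer=20)     # probe an intermediate layer
texts = rng.normal(size=(5, 64))        # 5 candidate text embeddings
texts[3] = video_emb + 0.01 * rng.normal(size=64)  # plant a near-duplicate
ranking = cosine_retrieval(video_emb, texts)
print(ranking[0])  # → 3 (the planted match is retrieved first)
```

In practice the hidden states would come from a frozen pre-trained MLLM (e.g. via a forward pass that returns all layer outputs), and the "calibrated head" would adjust the readout rather than use raw pooled features.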
Abstract
Recent studies have adapted generative Multimodal Large Language Models (MLLMs) into embedding extractors for vision tasks, typically through fine-tuning to produce universal representations. However, their performance on video remains inferior to that of Video Foundation Models (VFMs). In this paper, we focus on leveraging MLLMs for video-text embedding and retrieval. We first conduct a systematic layer-wise analysis, showing that intermediate (pre-trained) MLLM layers already encode substantial task-relevant information. Leveraging this insight, we demonstrate that combining intermediate-layer embeddings with a calibrated MLLM head yields strong zero-shot retrieval performance without any training. Building on these findings, we introduce a lightweight text-based alignment strategy that maps dense video captions to short summaries, enabling task-related video-text embedding learning without visual supervision. Remarkably, without any fine-tuning beyond text, our approach outperforms existing methods, often by a substantial margin, achieving state-of-the-art results across common video retrieval benchmarks.
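The text-based alignment strategy pairs each dense caption with its short summary and pulls matched pairs together in embedding space. The abstract does not specify the training objective, so the sketch below uses a standard symmetric InfoNCE contrastive loss as a stand-in, with in-batch negatives; the batch size, dimension, and temperature are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(caption_embs, summary_embs, temperature=0.07):
    """Symmetric InfoNCE over matched (dense caption, short summary) pairs.

    Row i of each matrix is assumed to be a matched pair; all other rows
    in the batch serve as in-batch negatives.
    """
    c = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    s = summary_embs / np.linalg.norm(summary_embs, axis=1, keepdims=True)
    logits = (c @ s.T) / temperature
    n = len(logits)
    idx = np.arange(n)
    # log-softmax over each row; the diagonal holds the positive pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_c2s = -log_prob[idx, idx].mean()
    # same in the other direction (summary -> caption)
    log_prob_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_s2c = -log_prob_t[idx, idx].mean()
    return 0.5 * (loss_c2s + loss_s2c)

# Toy check: perfectly matched pairs score a lower loss than mismatched ones.
rng = np.random.default_rng(1)
caps = rng.normal(size=(8, 32))         # 8 caption embeddings, 32-dim
matched = info_nce_loss(caps, caps)     # each caption paired with itself
mismatched = info_nce_loss(caps, caps[::-1])  # pairs deliberately shuffled
```

Because the loss touches only text embeddings, optimizing it requires no video frames or visual supervision, which is what makes the training paradigm lightweight.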