🤖 AI Summary
Text-to-video (T2V) retrieval is limited by the semantic gap between the video and text modalities, which conventional alignment-only methods struggle to bridge. This paper presents a systematic survey of 81 works that leverage auxiliary information, including visual attributes such as objects, temporal and spatial context, and textual signals such as speech and rephrased captions, to enhance T2V retrieval. The survey analyses the methodologies of these works in detail, highlights state-of-the-art results on benchmark datasets, catalogues the available datasets and the auxiliary information they provide, and outlines promising directions for future research on further exploiting auxiliary information to improve retrieval performance.
📝 Abstract
Text-to-Video (T2V) retrieval aims to identify the most relevant item in a gallery of videos given a user's text query. Traditional methods rely solely on aligning the video and text modalities to compute similarity and retrieve relevant items. However, recent approaches emphasise incorporating auxiliary information extracted from the video and text modalities to improve retrieval performance and bridge the semantic gap between them. Auxiliary information can include visual attributes, such as objects; temporal and spatial context; and textual descriptions, such as speech and rephrased captions. This survey comprehensively reviews 81 research papers on Text-to-Video retrieval that utilise such auxiliary information. It provides a detailed analysis of their methodologies, highlights state-of-the-art results on benchmark datasets, and discusses the available datasets and their auxiliary information. It also proposes promising directions for future research, focusing on ways to further exploit auxiliary information to enhance retrieval performance.
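To make the alignment-only baseline concrete, the sketch below shows how conventional T2V retrieval typically ranks a gallery: the text query and each video are embedded into a shared space, and videos are ordered by cosine similarity to the query. This is a minimal illustration assuming pre-computed CLIP-style joint embeddings; `rank_videos` and the toy dimensions are placeholders, not an API from any surveyed method.

```python
import torch
import torch.nn.functional as F

def rank_videos(query_emb: torch.Tensor, video_embs: torch.Tensor) -> torch.Tensor:
    """Return gallery indices sorted from most to least similar.

    query_emb:  (d,)   embedding of the text query
    video_embs: (n, d) embeddings of the n gallery videos
    """
    query_emb = F.normalize(query_emb, dim=-1)    # unit-norm query vector
    video_embs = F.normalize(video_embs, dim=-1)  # unit-norm gallery vectors
    sims = video_embs @ query_emb                 # (n,) cosine similarities
    return sims.argsort(descending=True)          # best match first

# Toy usage with random tensors standing in for real encoder outputs.
d, n = 512, 1000
ranking = rank_videos(torch.randn(d), torch.randn(n, d))
print(ranking[:5])  # indices of the 5 highest-scoring videos
```

Auxiliary-information methods extend this pipeline by enriching one or both sides of the similarity computation, e.g. with detected objects, spatiotemporal context, or speech transcripts, rather than relying on the raw video-text alignment alone.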