🤖 AI Summary
Video-text retrieval suffers from limited cross-modal representation generalization due to low-quality and small-scale annotated training data. To address this, we propose DREAM, a relevance-enhanced learning paradigm featuring the first relevance-driven joint augmentation mechanism: (1) self-similarity-guided frame perturbation to enhance video robustness; (2) semantic-preserving text rewriting with large language models (LLMs); and (3) semantic-consistent video stylization with visual generative models (VGMs). The synthesized high-quality, multi-granularity semantic signals are integrated into a contrastive learning framework. Extensive experiments show that DREAM significantly outperforms state-of-the-art methods on major benchmarks, including MSR-VTT, DiDeMo, and ActivityNet, validating the critical role of semantic-relevance-guided data augmentation in improving cross-modal representation generalization.
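The summary states that the augmented video-text pairs are integrated into a contrastive learning framework. The paper does not give its loss in this excerpt, but video-text contrastive training is typically a symmetric InfoNCE objective over a batch of matched pairs. Below is a minimal NumPy sketch under that assumption; the function name `info_nce` and the temperature value are illustrative, not from the paper.

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss: matched video/text pairs sit on the
    diagonal of the batch similarity matrix and are pulled together,
    while all other pairings in the batch act as negatives."""
    # Normalize embeddings so dot products are cosine similarities.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature      # (B, B) similarity matrix
    labels = np.arange(len(v))          # matching pairs on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax over each row.
        l = l - l.max(axis=1, keepdims=True)
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()

    # Average the video->text and text->video directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Augmented views (perturbed frames, rewritten captions, stylized clips) would simply contribute extra positive pairs to such a batch.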
📝 Abstract
Recent progress in video-text retrieval has been driven largely by advances in model architectures and training strategies. However, the representation learning capabilities of video-text retrieval models remain constrained by low-quality and limited training data annotations. To address this issue, we present a novel ViDeoText Retrieval Paradigm with RElevance-based AugMentation, namely DREAM, which enhances video and text data using large foundation models to learn more generalized features. Specifically, we first adopt a simple augmentation method that generates self-similar data by randomly duplicating or dropping subwords and frames. In addition, inspired by recent advances in visual and language generative models, we propose a more robust augmentation method based on textual paraphrasing and video stylization using large language models (LLMs) and visual generative models (VGMs). To further enrich video and text information, we propose a relevance-based augmentation method in which LLMs and VGMs generate new relevant information and integrate it into the original data. Leveraging this enriched data, DREAM consistently outperforms existing methods, as demonstrated by extensive experiments on several video-text retrieval benchmarks.
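The simplest augmentation described above generates self-similar data by randomly duplicating or dropping subwords and frames. A minimal sketch of that idea follows; the function name `self_similar_augment` and the default probabilities are our assumptions for illustration, not values from the paper.

```python
import random

def self_similar_augment(tokens, dup_prob=0.1, drop_prob=0.1, rng=None):
    """Create a self-similar variant of a sequence (subword tokens or
    video frames) by independently dropping or duplicating elements."""
    rng = rng or random.Random()
    out = []
    for tok in tokens:
        r = rng.random()
        if r < drop_prob:
            continue                 # drop this subword/frame
        out.append(tok)
        if r > 1.0 - dup_prob:
            out.append(tok)          # duplicate this subword/frame
    return out or list(tokens)       # never collapse to an empty sequence
```

The same routine applies to a caption's subword list or a clip's frame list; the perturbed copy keeps the original semantics while varying the surface form, which is what makes it usable as an extra positive pair in training.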