DREAM: Improving Video-Text Retrieval Through Relevance-Based Augmentation Using Large Foundation Models

📅 2024-04-07
📈 Citations: 2
Influential: 0
🤖 AI Summary
Video-text retrieval suffers from limited cross-modal representation generalization due to low-quality and small-scale annotated training data. To address this, we propose DREAM, a relevance-enhanced learning paradigm featuring the first relevance-driven joint augmentation mechanism: (1) self-similarity-guided frame perturbation to enhance video robustness; (2) large language model (LLM)-based semantic-preserving text rewriting; and (3) vision generative model (VGM)-enabled semantic-consistent video stylization. The synthesized high-quality, multi-granularity semantic signals are integrated into a contrastive learning framework. Extensive experiments demonstrate that DREAM achieves significant improvements over state-of-the-art methods on major benchmarks—including MSR-VTT, DiDeMo, and ActivityNet—validating the critical role of semantic relevance–guided data augmentation in enhancing cross-modal representation generalization.
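The summary states that the synthesized signals are integrated into a contrastive learning framework. As a minimal sketch of what such a framework typically looks like, here is a symmetric InfoNCE loss over a batch of paired video/text embeddings; the function name, shapes, and temperature are illustrative assumptions, not details from the paper (augmented views would simply contribute extra positive pairs):

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over paired (video, text) embeddings.
    Rows of the two matrices are assumed to be matched pairs;
    temperature and shapes are illustrative, not from the paper."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))

    def xent(l):
        # cross-entropy with the diagonal as the positive class
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the video->text and text->video directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Matched pairs on the diagonal drive the loss toward zero; mismatched pairings raise it.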

📝 Abstract
Recent progress in video-text retrieval has been driven largely by advancements in model architectures and training strategies. However, the representation learning capabilities of video-text retrieval models remain constrained by low-quality and limited training data annotations. To address this issue, we present a novel ViDeoText Retrieval Paradigm with RElevance-based AugMentation, namely DREAM, which enhances video and text data using large foundation models to learn more generalized features. Specifically, we first adopt a simple augmentation method, which generates self-similar data by randomly duplicating or dropping subwords and frames. In addition, inspired by recent advancements in visual and language generative models, we propose a more robust augmentation method through textual paraphrasing and video stylization using large language models (LLMs) and visual generative models (VGMs). To further enrich video and text information, we propose a relevance-based augmentation method, where LLMs and VGMs generate and integrate new relevant information into the original data. Leveraging this enriched data, extensive experiments on several video-text retrieval benchmarks demonstrate the superiority of DREAM over existing methods.
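The simple augmentation described in the abstract (randomly duplicating or dropping subwords and frames to generate self-similar data) can be sketched as follows; the function name and probability values are illustrative assumptions, not taken from the paper:

```python
import random

def self_similar_augment(items, p_dup=0.1, p_drop=0.1, rng=None):
    """Randomly drop or duplicate elements of a sequence (subword
    tokens or frame indices) to produce a self-similar augmented view.
    Probabilities are illustrative, not from the paper."""
    rng = rng or random.Random()
    out = []
    for item in items:
        r = rng.random()
        if r < p_drop:
            continue              # drop this subword/frame
        out.append(item)
        if r > 1.0 - p_dup:
            out.append(item)      # duplicate it in place
    return out or list(items)     # never return an empty sequence
```

The same routine applies to a token list or a list of sampled frame indices, keeping the augmented view close to the original (hence "self-similar").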
Problem

Research questions and friction points this paper is trying to address.

Enhance video-text retrieval with relevance-based augmentation
Address low-quality training data in video-text models
Utilize large foundation models for generalized feature learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relevance-based augmentation method
Large language models integration
Visual generative models enhancement
🔎 Similar Papers
Yimu Wang — University of Waterloo
Shuai Yuan — Duke University
Xiangru Jian — University of Waterloo
Wei Pang — University of Waterloo
Mushi Wang — University of Waterloo
Ning Yu — Netflix