Expertized Caption Auto-Enhancement for Video-Text Retrieval

📅 2025-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the modality gap in video–text retrieval caused by coarse-grained original captions, this paper proposes a fully data-driven automatic caption enhancement method. First, it introduces a prompt-free self-supervised caption enhancement framework powered by large language models to generate high-quality, semantically rich captions. Second, it designs a video-aware expert caption selection module that dynamically identifies the most contextually aligned, fine-grained descriptions. Third, it employs an end-to-end differentiable training strategy to jointly optimize cross-modal retrieval performance. The approach eliminates reliance on manual prompt engineering, handcrafted lexicons, or large-scale annotated data, enabling personalized and adaptive cross-modal alignment. Extensive experiments demonstrate state-of-the-art performance: top-1 recall scores of 68.5% on MSR-VTT, 68.1% on MSVD, and 62.0% on DiDeMo—substantially surpassing existing methods.
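The video-aware expert caption selection described above could be sketched roughly as follows — a minimal, hypothetical illustration, not the paper's actual implementation. All names (`select_expert_caption`, the temperature value, the embedding shapes) are assumptions; the idea is simply that candidate augmented captions are scored against the video embedding and a soft, differentiable selection keeps the pipeline end-to-end trainable:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def select_expert_caption(video_emb, caption_embs, temperature=0.1):
    """Score candidate captions against a video embedding.

    video_emb:    (d,)  video feature vector.
    caption_embs: (n,d) candidate augmented-caption features.
    Returns (best_index, weights): the hard pick plus a soft
    selection distribution usable in differentiable training.
    """
    v = video_emb / np.linalg.norm(video_emb)
    c = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = c @ v                            # cosine similarity per caption
    weights = softmax(sims / temperature)   # soft, differentiable selection
    return int(np.argmax(weights)), weights
```

For instance, given a video embedding `[1, 0]` and two candidate captions embedded at `[0, 1]` and `[1, 0.1]`, the second caption would score higher and be selected.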

📝 Abstract
The burgeoning field of video-text retrieval has witnessed significant advancements with the advent of deep learning. However, the challenge of matching text and video persists due to inadequate textual descriptions of videos. The substantial information gap between the two modalities hinders a comprehensive understanding of videos, resulting in ambiguous retrieval results. While rewriting methods based on large language models have been proposed to broaden text expressions, carefully crafted prompts are essential to ensure the reasonableness and completeness of the rewritten texts. This paper proposes an automatic caption enhancement method that enhances expression quality and mitigates empiricism in augmented captions through self-learning. Additionally, an expertized caption selection mechanism is designed and introduced to customize augmented captions for each video, facilitating video-text matching. Our method is entirely data-driven, which not only dispenses with heavy data collection and computation workload but also improves self-adaptability by circumventing lexicon dependence and introducing personalized matching. The superiority of our method is validated by state-of-the-art results on various benchmarks, specifically achieving Top-1 recall accuracy of 68.5% on MSR-VTT, 68.1% on MSVD, and 62.0% on DiDeMo.
Problem

Research questions and friction points this paper is trying to address.

Enhances video-text retrieval accuracy
Automates caption enhancement with self-learning
Customizes captions for personalized video matching
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automatic caption enhancement method
Expertized caption selection mechanism
Data-driven without heavy computation