🤖 AI Summary
This paper targets the substantial semantic gap between text and video modalities and the difficulty of fine-grained cross-modal alignment in text-video retrieval. To this end, the authors propose a syntax-driven dual-path cross-modal modeling framework, which they describe as the first to explicitly model the hierarchical dependency structure of natural-language syntax. The framework establishes a dual-path guidance mechanism: (1) syntax-guided spatiotemporal attention, which dynamically focuses on salient spatiotemporal regions in videos to enable fine-grained visual representation alignment; and (2) a hierarchical contrastive learning loss, which grounds similarity measurement in syntax-level alignment. The method achieves state-of-the-art performance on four major benchmarks (MSR-VTT, MSVD, DiDeMo, and ActivityNet), with an average R@1 improvement of 3.2%. Ablation studies confirm that explicit modeling of the syntactic hierarchy is the primary driver of the performance gains.
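One plausible reading of the hierarchical contrastive learning loss is a standard symmetric InfoNCE objective computed at each level of the syntax hierarchy and then combined with per-level weights. The sketch below illustrates that reading in numpy; `info_nce`, `hierarchical_contrastive_loss`, the temperature `tau`, and the weighting scheme are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def log_softmax(x, axis):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def info_nce(sim, tau=0.07):
    """Symmetric InfoNCE over a (B, B) text-video similarity matrix;
    matched pairs lie on the diagonal."""
    logits = sim / tau
    diag = np.arange(sim.shape[0])
    t2v = -log_softmax(logits, axis=1)[diag, diag].mean()  # text -> video
    v2t = -log_softmax(logits, axis=0)[diag, diag].mean()  # video -> text
    return 0.5 * (t2v + v2t)

def hierarchical_contrastive_loss(level_sims, level_weights, tau=0.07):
    """Hypothetical aggregation: weighted sum of contrastive losses, one
    similarity matrix per syntax level (e.g. word / phrase / sentence)."""
    return float(sum(w * info_nce(s, tau)
                     for w, s in zip(level_weights, level_sims)))
```

Under this assumption, a batch whose matched pairs dominate the diagonal at every syntax level yields a near-zero loss, while mismatched batches are penalized at each level in proportion to its weight.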
📝 Abstract
The user base of short-video apps has grown at an unprecedented rate in recent years, creating a significant demand for video content analysis. In particular, text-video retrieval, which aims to find the videos in a vast corpus that best match a given text description, is an essential function whose primary challenge is bridging the modality gap. Nevertheless, most existing approaches treat texts merely as sequences of discrete tokens and neglect their syntactic structure. Moreover, the abundant spatial and temporal cues in videos are often underutilized due to the lack of interaction with text. To address these issues, we argue that it is beneficial to use texts as guidance for focusing on the relevant temporal frames and spatial regions within videos. In this paper, we propose a novel Syntax-Hierarchy-Enhanced text-video retrieval method (SHE-Net) that exploits the inherent semantic and syntactic hierarchy of texts to bridge the modality gap from two perspectives. First, to enable a more fine-grained integration of visual content, we employ the text syntax hierarchy, which reveals the grammatical structure of text descriptions, to guide the visual representations. Second, to further enhance multi-modal interaction and alignment, we also utilize the syntax hierarchy to guide the similarity calculation. We evaluated our method on four public text-video retrieval datasets: MSR-VTT, MSVD, DiDeMo, and ActivityNet. The experimental results and ablation studies confirm the advantages of our proposed method.
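The two perspectives above, syntax nodes guiding visual pooling and the syntax hierarchy guiding similarity calculation, can be pictured with a minimal numpy sketch. Everything here is an exposition-only assumption: `syntax_guided_attention`, `hierarchical_similarity`, the cosine pooling, and the level weights are hypothetical stand-ins, not SHE-Net's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def syntax_guided_attention(text_nodes, video_feats):
    """Each syntax node (word / phrase / sentence embedding) attends over
    video frame features and pools a node-specific visual vector.
    text_nodes: (N, d); video_feats: (T, d) -> (N, d)."""
    d = text_nodes.shape[-1]
    attn = softmax(text_nodes @ video_feats.T / np.sqrt(d), axis=-1)  # (N, T)
    return attn @ video_feats

def hierarchical_similarity(text_levels, video_feats, level_weights):
    """Text-video similarity aggregated over syntax levels: cosine between
    each node and its attention-pooled visual feature, averaged within a
    level, then combined with per-level weights (an assumed scheme)."""
    sims = []
    for nodes in text_levels:
        pooled = syntax_guided_attention(nodes, video_feats)
        num = (nodes * pooled).sum(-1)
        den = (np.linalg.norm(nodes, axis=-1)
               * np.linalg.norm(pooled, axis=-1) + 1e-8)
        sims.append((num / den).mean())
    return float(np.dot(level_weights, sims))
```

With convex level weights, the aggregated score stays in [-1, 1], so it can be ranked directly against other candidate videos in the corpus.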