🤖 AI Summary
To address the high visual diversity of relevant videos in Ad-hoc Video Search (AVS), where existing cross-modal retrieval methods that fuse multiple features into a single common space tend to omit discriminative content, this paper proposes LPD (Learning Partially Decorrelated common spaces). Methodologically, it (1) constructs a separate, feature-specific common embedding space for each video and text feature instead of a single fused space; (2) introduces a de-correlation loss that diversifies the ordering of negative samples across spaces; and (3) combines the per-space triplet ranking losses through an entropy-based fair fusion mechanism for diversity-aware video–text alignment. Evaluated on the full TRECVID AVS 2016–2023 benchmark suite, the method significantly improves both retrieval coverage and result diversity, and diversity visualizations confirm complementarity and semantic specialization across the learned spaces.
📝 Abstract
Ad-hoc Video Search (AVS) involves using a textual query to search for multiple relevant videos in a large collection of unlabeled short videos. The main challenge of AVS is the visual diversity of relevant videos. A simple query such as "Find shots of a man and a woman dancing together indoors" can span a multitude of environments, from brightly lit halls and shadowy bars to dance scenes in black-and-white animations. It is therefore essential to retrieve relevant videos as comprehensively as possible. Current solutions for the AVS task primarily fuse multiple features into one or more common spaces, yet overlook the need for diverse spaces. To fully exploit the expressive capability of individual features, we propose LPD, short for Learning Partially Decorrelated common spaces. LPD incorporates two key innovations: feature-specific common space construction and the de-correlation loss. Specifically, LPD learns a separate common space for each video and text feature, and employs a de-correlation loss to diversify the ordering of negative samples across different spaces. To enhance the consistency of multi-space convergence, we design an entropy-based fair multi-space triplet ranking loss. Extensive experiments on the TRECVID AVS benchmarks (2016-2023) justify the effectiveness of LPD. Moreover, diversity visualizations of LPD's spaces highlight its ability to enhance result diversity.
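The objective described above combines three ingredients: a triplet ranking loss computed independently in each common space, a de-correlation term that discourages the spaces from ordering negative samples the same way, and a fairness mechanism that balances convergence across spaces. The sketch below illustrates one plausible reading for a single query; the concrete formulas are assumptions, not the paper's implementation. Hardest-negative triplet loss, an absolute Pearson-correlation penalty between negative-similarity vectors, and softmax weights over per-space losses (standing in for the entropy-based fair fusion) are all illustrative choices, and `lpd_losses` and `lam` are hypothetical names.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between rows of a and rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def lpd_losses(text_embs, video_embs, pos_idx, margin=0.2, lam=0.5):
    """Illustrative multi-space objective for ONE text query.

    text_embs[k]  : (1, d_k) query embedding in common space k
    video_embs[k] : (N, d_k) video embeddings in common space k (N >= 4)
    pos_idx       : index of the relevant video
    Returns (total, per_space_triplet_losses, decorrelation_term).
    """
    K = len(text_embs)
    triplet = np.zeros(K)
    neg_scores = []
    for k in range(K):
        sims = cosine_sim(text_embs[k], video_embs[k])[0]          # (N,)
        negs = np.delete(sims, pos_idx)
        # Hardest-negative triplet ranking loss in space k (assumed form).
        triplet[k] = max(0.0, margin + negs.max() - sims[pos_idx])
        neg_scores.append(negs)

    # De-correlation term (assumed form): penalise correlated negative
    # orderings across every pair of spaces, averaged over pairs.
    decorr = 0.0
    for i in range(K):
        for j in range(i + 1, K):
            decorr += abs(np.corrcoef(neg_scores[i], neg_scores[j])[0, 1])
    decorr /= K * (K - 1) / 2

    # "Fair" fusion (assumed form): softmax weights over per-space losses
    # give lagging spaces more influence, so all spaces converge together.
    w = np.exp(triplet) / np.exp(triplet).sum()
    total = float(w @ triplet + lam * decorr)
    return total, triplet, decorr
```

In training, this per-query quantity would be averaged over a mini-batch and minimized jointly with the encoders that produce the per-space embeddings; at retrieval time, the per-space similarities are fused into a single ranking score.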