🤖 AI Summary
To address the inaccurate localization of short moments in DETR-based video moment retrieval, this paper proposes MomentMix, a data augmentation scheme comprising ForegroundMix and BackgroundMix, together with a Length-Aware Decoder. MomentMix enriches the feature diversity of foregrounds and backgrounds, which is especially limited for short moments, while the Length-Aware Decoder introduces a novel length-conditioned bipartite matching process that corrects a systematic bias in center-position prediction. Together these enable length-adaptive modeling and matching optimization within the DETR framework. Experiments demonstrate state-of-the-art performance: on QVHighlights, the approach achieves new SOTA R@1 and mAP, including a 2.46% gain in R1@0.7 and a 2.57% gain in average mAP, and it also attains the highest R1@0.7 on TACoS and Charades-STA.
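As a rough illustration of the BackgroundMix idea, one could paste a moment's foreground clip features into the background of a different video, preserving the moment's length while changing its temporal context. Everything below (the function name, the list-of-clip-features representation, the random placement) is a hypothetical sketch, not the paper's actual implementation:

```python
import random

def background_mix(fg_feats, donor_video, rng=None):
    """Hypothetical BackgroundMix sketch: insert a moment's foreground
    clip features into a background drawn from another video, keeping
    the moment's length but giving it a new surrounding context."""
    rng = rng or random.Random()
    L, T = len(fg_feats), len(donor_video)
    s = rng.randrange(0, T - L + 1)        # new start index in donor video
    # Donor clips before/after the insertion point become the background.
    mixed = donor_video[:s] + fg_feats + donor_video[s + L:]
    return mixed, (s, s + L)               # augmented video + new GT span
```

The augmented sample keeps the original foreground (and thus its query label) but pairs it with unseen background features, which is one plausible way to diversify short-moment training data.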
📝 Abstract
Video Moment Retrieval (MR) aims to localize moments within a video based on a given natural language query. Given the prevalent use of platforms like YouTube for information retrieval, the demand for MR techniques is growing significantly. Recent DETR-based models have made notable advances in performance but still struggle to accurately localize short moments. Through data analysis, we identified limited feature diversity in short moments, which motivated the development of MomentMix. MomentMix employs two augmentation strategies, ForegroundMix and BackgroundMix, which enhance the feature representations of the foreground and background, respectively. Additionally, our analysis of prediction bias revealed that models particularly struggle to predict the center positions of short moments accurately. To address this, we propose a Length-Aware Decoder, which conditions on moment length through a novel bipartite matching process. Our extensive studies demonstrate the efficacy of our length-aware approach, especially in localizing short moments, leading to improved overall performance. Our method surpasses state-of-the-art DETR-based methods on benchmark datasets, achieving the highest R1 and mAP on QVHighlights and the highest R1@0.7 on TACoS and Charades-STA (e.g., a 2.46% gain in R1@0.7 and a 2.57% gain in average mAP on QVHighlights). The code is available at https://github.com/sjpark5800/LA-DETR.
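The length-conditioned matching idea can be pictured as constraining DETR-style bipartite matching so that a prediction is only assigned to a ground-truth moment of similar length. Below is a minimal brute-force sketch; the `length_bin` scheme, the bin count, and the L1-only span cost are all assumptions for illustration, not the paper's specified matcher (which would typically also include classification and IoU terms and use the Hungarian algorithm):

```python
from itertools import permutations

def length_bin(span, n_bins=3):
    # Bucket a normalized (start, end) span by its length (assumed scheme).
    return min(int((span[1] - span[0]) * n_bins), n_bins - 1)

def length_aware_match(preds, gts, big=1e6):
    """Brute-force bipartite matching over moment spans; pairs whose
    length bins differ receive a large penalty, so assignments stay
    within the same length group. Assumes len(preds) >= len(gts) and
    spans normalized to [0, 1]."""
    def cost(p, g):
        l1 = abs(p[0] - g[0]) + abs(p[1] - g[1])      # L1 span distance
        return l1 + (big if length_bin(p) != length_bin(g) else 0.0)
    best_total, best_pairs = float("inf"), []
    for perm in permutations(range(len(preds)), len(gts)):
        total = sum(cost(preds[i], gts[j]) for j, i in enumerate(perm))
        if total < best_total:
            best_total, best_pairs = total, [(i, j) for j, i in enumerate(perm)]
    return best_pairs  # list of (pred_index, gt_index) pairs
```

In a real DETR pipeline the exhaustive search would be replaced by the Hungarian algorithm (e.g. `scipy.optimize.linear_sum_assignment`) over the penalized cost matrix; the penalty term is what makes the matching length-aware.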