Moment of Untruth: Dealing with Negative Queries in Video Moment Retrieval

📅 2025-02-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video moment retrieval (VMR) methods operate under an implicit “query-must-exist” assumption, rendering them incapable of rejecting irrelevant queries—leading to erroneous moment localization. To address this limitation, we propose Negative-Aware Video Moment Retrieval (NA-VMR), a new task requiring joint positive moment localization and negative query rejection—specifically distinguishing in-domain from out-of-domain negatives. To support this task, we introduce the first NA-VMR benchmark, featuring formally defined dual-type negative samples, a newly curated data split, and a standardized evaluation protocol. We further propose UniVTG-NA, an extension of UniVTG that integrates a dedicated negative-query discrimination head and a contrastive negative sampling mechanism. Experiments demonstrate that UniVTG-NA achieves a 98.4% average negative query rejection rate while incurring only a 3.87% drop in Recall@1—significantly outperforming state-of-the-art methods. Our code and dataset are publicly released.
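The core decision NA-VMR adds on top of standard moment retrieval can be sketched in plain Python: given per-clip relevance scores for a query, either localize the best-scoring contiguous span or reject the query as negative when no clip is sufficiently relevant. This is a minimal illustrative sketch, not the paper's actual method; the score representation, the threshold `tau`, and the function name are all hypothetical, and UniVTG-NA's real discrimination head is a learned component rather than a fixed threshold.

```python
def retrieve_or_reject(clip_scores, tau=0.5):
    """Toy NA-VMR decision rule (hypothetical; not the paper's method).

    Returns (start_idx, end_idx) of the highest-scoring contiguous run of
    clips whose scores all exceed `tau`, or None to reject the query as
    negative when no clip clears the threshold.
    """
    best = None          # (mean_score, start, end) of the best run so far
    run_start = None     # start index of the run currently being scanned
    # Append a sentinel below the threshold so the final run is closed.
    for i, s in enumerate(list(clip_scores) + [float("-inf")]):
        if s >= tau and run_start is None:
            run_start = i
        elif s < tau and run_start is not None:
            run_score = sum(clip_scores[run_start:i]) / (i - run_start)
            if best is None or run_score > best[0]:
                best = (run_score, run_start, i - 1)
            run_start = None
    if best is None:
        return None      # no relevant clips: treat the query as negative
    return best[1], best[2]
```

For example, `retrieve_or_reject([0.1, 0.8, 0.9, 0.2], tau=0.5)` localizes the span `(1, 2)`, while `retrieve_or_reject([0.1, 0.2], tau=0.5)` returns `None`, i.e. the query is rejected. The paper's evaluation trades these two behaviors off: rejection accuracy on negative queries versus Recall@1 on positive ones.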

📝 Abstract
Video Moment Retrieval is a common task to evaluate the performance of vision-language models: it involves localising start and end times of moments in videos from query sentences. The current task formulation assumes that the queried moment is present in the video, resulting in false positive moment predictions when irrelevant query sentences are provided. In this paper we propose the task of Negative-Aware Video Moment Retrieval (NA-VMR), which considers both moment retrieval accuracy and negative query rejection accuracy. We make the distinction between In-Domain and Out-of-Domain negative queries and provide new evaluation benchmarks for two popular video moment retrieval datasets: QVHighlights and Charades-STA. We analyse the ability of current SOTA video moment retrieval approaches to adapt to Negative-Aware Video Moment Retrieval and propose UniVTG-NA, an adaptation of UniVTG designed to tackle NA-VMR. UniVTG-NA achieves high negative rejection accuracy (avg. 98.4%) while retaining moment retrieval scores to within 3.87% Recall@1. Dataset splits and code are available at https://github.com/keflanagan/MomentofUntruth
Problem

Research questions and friction points this paper is trying to address.

Handling negative queries in video retrieval
Improving negative query rejection accuracy
Adapting models for negative-aware video moment retrieval
Innovation

Methods, ideas, or system contributions that make the work stand out.

Negative-Aware Video Moment Retrieval task formulation with In-Domain and Out-of-Domain negative benchmarks
UniVTG-NA: a UniVTG adaptation with a dedicated negative-query discrimination head
Contrastive negative sampling mechanism