🤖 AI Summary
This paper addresses video temporal grounding, the task of localizing the start and end timestamps of a natural language query within a video. We propose ED-VTG, a two-stage approach built on multimodal large language models (MLLMs). In the first stage, query enrichment combined with dynamic multiple-instance learning selects the optimal semantically enhanced version of each query, mitigating LLM hallucination. In the second stage, a lightweight decoder regresses precise temporal boundaries from contextualized joint video-text representations. ED-VTG achieves state-of-the-art performance on several standard benchmarks, including QVHighlights, TVR, and TACoS, significantly outperforming existing LLM-based methods, and it generalizes well in zero-shot transfer settings. The method improves robustness to hallucination while maintaining computational efficiency and cross-dataset adaptability.
📝 Abstract
We introduce ED-VTG, a method for fine-grained video temporal grounding using multimodal large language models. Our approach harnesses the ability of multimodal LLMs to jointly process text and video in order to localize natural language queries in videos through a two-stage process. Rather than being grounded directly, language queries are first transformed into enriched sentences that incorporate missing details and cues to aid grounding. In the second stage, these enriched queries are grounded by a lightweight decoder that specializes in predicting accurate boundaries conditioned on contextualized representations of the enriched queries. To mitigate noise and reduce the impact of hallucinations, the model is trained with a multiple-instance-learning objective that dynamically selects the optimal version of the query for each training sample. We demonstrate state-of-the-art results across various benchmarks in both temporal video grounding and paragraph grounding settings. Experiments show that our method significantly outperforms all previously proposed LLM-based temporal grounding approaches and is superior or comparable to specialized models, while maintaining a clear advantage over them in zero-shot evaluation scenarios.
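The dynamic multiple-instance-learning selection described above can be sketched minimally: among several enriched variants of one query, only the variant whose predicted boundaries incur the lowest grounding loss contributes to training. This is an illustrative sketch, not the paper's implementation; the function names, the L1 boundary loss, and the example values are all hypothetical.

```python
# Hedged sketch of dynamic multiple-instance-learning (MIL) selection:
# given boundary predictions from several enriched query variants, keep
# only the variant with the lowest grounding loss. Names are hypothetical.

def grounding_loss(pred, target):
    """L1 distance between predicted and target (start, end) timestamps."""
    return abs(pred[0] - target[0]) + abs(pred[1] - target[1])

def dynamic_mil_loss(predictions, target):
    """predictions: list of (start, end) pairs, one per enriched variant.

    Returns the minimum loss and the index of the selected variant,
    mirroring the idea of training on the optimal query version only.
    """
    losses = [grounding_loss(p, target) for p in predictions]
    best = min(range(len(losses)), key=losses.__getitem__)
    return losses[best], best

# Example: three enriched variants of one query, ground truth (12.0, 20.5)
preds = [(10.0, 22.0), (12.5, 20.0), (5.0, 30.0)]
loss, idx = dynamic_mil_loss(preds, (12.0, 20.5))
```

In a real training loop the selected variant's loss would be backpropagated, so poorly enriched (hallucinated) variants are dynamically down-weighted per sample.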