🤖 AI Summary
This work addresses the Temporal Video Grounding (TVG) task: localizing temporal segments in long videos that match a natural language query. The authors propose TimeZero, a reasoning-guided large vision-language model (LVLM). Methodologically, it introduces the first purely reinforcement-learning-driven (PPO-based) LVLM inference paradigm for TVG, removing the need for intermediate step annotations and enabling end-to-end video-language relational modeling without step-level supervision. To improve robustness for fine-grained localization in long videos, the model decouples spatiotemporal understanding from language alignment and integrates a video-frame feature pyramid with cross-modal attention. On the Charades-STA benchmark, TimeZero achieves state-of-the-art performance, significantly outperforming existing fully and weakly supervised approaches. The code is publicly available.
📝 Abstract
We introduce TimeZero, a reasoning-guided LVLM designed for the temporal video grounding (TVG) task, which requires precisely localizing relevant video segments in a long video given a natural language query. TimeZero tackles this challenge by extending the inference process, enabling the model to reason about video-language relationships solely through reinforcement learning. To evaluate the effectiveness of TimeZero, we conduct experiments on two benchmarks, and TimeZero achieves state-of-the-art performance on Charades-STA. Code is available at https://github.com/www-Ye/TimeZero.
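The paper does not spell out its reward design here, but RL-based temporal grounding is typically driven by a temporal IoU reward: the model emits a (start, end) segment and is rewarded by its overlap with the annotated segment, with no intermediate step supervision. A minimal sketch of that reward (function names and the exact reward shaping are illustrative assumptions, not TimeZero's published implementation):

```python
def temporal_iou(pred, gt):
    """Temporal IoU between two (start, end) segments, e.g. in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def grounding_reward(pred, gt):
    """RL reward for one rollout: overlap of predicted vs. annotated segment.
    Illustrative only; the paper's exact reward is not specified in this summary."""
    return temporal_iou(pred, gt)

# Example: a prediction half-overlapping the ground truth gets IoU 1/3.
print(round(grounding_reward((0.0, 10.0), (5.0, 15.0)), 3))  # 0.333
```

Benchmarks such as Charades-STA report metrics like R@1 at IoU thresholds (e.g. IoU ≥ 0.5), so a reward of this form directly optimizes the evaluation criterion.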