TRACE: Temporal Grounding Video LLM via Causal Event Modeling

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Existing video large language models (V-LLMs) for video temporal grounding (VTG) rely solely on natural language generation, neglecting the intrinsic temporal structure of videos and thus suffering from limited modeling capacity. To address this, we propose a causal event modeling framework that explicitly structures V-LLM outputs into timestamped, saliency-scored, and text-described event sequences, enabling autoregressive prediction of each event conditioned on preceding events, frame-level features, and instructions. We introduce TRACE, a task-interleaved V-LLM with a multi-encoder, multi-decoder architecture that models visual, temporal, saliency, and textual signals through separate encoders and decoding heads while training them jointly. TRACE further incorporates task-token interleaving and joint encoding of frame and timestamp signals. Our method achieves significant improvements over state-of-the-art methods across multiple VTG benchmarks, supports zero-shot generalization, and is fully open-sourced with code and models.
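The causal event modeling idea can be written as an autoregressive factorization over structured events. The sketch below uses our own notation (R, F, I, and e_k are assumptions, not symbols quoted from the paper): each event bundles timestamps, a saliency score, and a caption, and is predicted from all earlier events together with the frame features and the instruction.

```latex
% Hedged sketch of the causal event modeling factorization (notation assumed).
% R: structured response, F: frame-level visual features, I: textual instruction,
% e_k = (t_k, s_k, c_k): k-th event with timestamps t_k, saliency score s_k, caption c_k.
P(R \mid F, I) = \prod_{k=1}^{K} P\bigl(e_k \mid e_{1:k-1}, F, I\bigr),
\qquad e_k = (t_k, s_k, c_k).
```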

📝 Abstract
Video Temporal Grounding (VTG) is a crucial capability for video understanding models and plays a vital role in downstream tasks such as video browsing and editing. To handle various tasks simultaneously and enable zero-shot prediction, there is a growing trend toward employing video LLMs for VTG tasks. However, current video LLM-based methods rely exclusively on natural language generation and lack the ability to model the clear structure inherent in videos, which restricts their effectiveness on VTG tasks. To address this issue, this paper first formally introduces the causal event modeling framework, which represents video LLM outputs as sequences of events and predicts the current event using previous events, video inputs, and textual instructions. Each event consists of three components: timestamps, saliency scores, and textual captions. We then propose TRACE, a novel task-interleaved video LLM, to effectively implement the causal event modeling framework in practice. TRACE processes visual frames, timestamps, saliency scores, and text as distinct tasks, employing separate encoders and decoding heads for each. Task tokens are arranged in an interleaved sequence according to the causal event modeling framework's formulation. Extensive experiments on various VTG tasks and datasets demonstrate the superior performance of TRACE compared with state-of-the-art video LLMs. Our model and code are available at https://github.com/gyxxyg/TRACE.
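As an illustration of the event structure described in the abstract, here is a minimal Python sketch of how timestamped, saliency-scored, captioned events could be flattened into an interleaved task-token sequence. The Event class, token markers, and per-event ordering are illustrative assumptions, not the released TRACE implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """One structured event: a time span, a saliency score, and a caption."""
    start: float      # seconds
    end: float        # seconds
    saliency: float   # e.g. in [0, 1]
    caption: str

def interleave_events(events: List[Event]) -> List[str]:
    """Flatten events into an interleaved task-token sequence
    (time -> saliency -> text for each event)."""
    tokens: List[str] = []
    for ev in events:
        tokens += [f"<time>{ev.start:.1f}-{ev.end:.1f}</time>",
                   f"<score>{ev.saliency:.2f}</score>",
                   f"<text>{ev.caption}</text>"]
    return tokens

# Example: two dense-captioning events become one autoregressive target sequence.
seq = interleave_events([
    Event(0.0, 12.5, 0.8, "a person opens the fridge"),
    Event(12.5, 30.0, 0.4, "the person pours a glass of milk"),
])
print(seq)
```

Placing time before score before text within each event mirrors a causal ordering in which later fields of an event can condition on the earlier ones; the exact ordering used by TRACE is not assumed here.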
Problem

Research questions and friction points this paper is trying to address.

Existing video LLM methods for video temporal grounding (VTG) rely solely on free-form natural language generation and cannot model the temporal structure inherent in videos, limiting their grounding performance.
How to represent video LLM outputs as structured event sequences, i.e., causal event modeling.
How to implement this structured formulation in practice, which motivates TRACE, a task-interleaved video LLM.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal event modeling for video understanding
Task-interleaved video LLM named TRACE
Multi-task processing with distinct encoders and decoding heads (see the sketch below)
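To make "distinct encoders and decoding heads" concrete, the following is a minimal PyTorch-style skeleton written under our own assumptions: module names, sizes, and the simple concatenation-based routing are illustrative only, and the actual TRACE architecture builds on a pretrained LLM backbone with interleaved task tokens.

```python
import torch
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    """Toy multi-encoder / multi-decoder skeleton: each signal type
    (time, saliency, text) gets its own small encoder and decoding head,
    with a shared Transformer as a stand-in for the LLM backbone."""
    def __init__(self, hidden: int = 256, text_vocab: int = 32000,
                 time_vocab: int = 16, score_vocab: int = 16):
        super().__init__()
        self.time_encoder = nn.Embedding(time_vocab, hidden)
        self.score_encoder = nn.Embedding(score_vocab, hidden)
        self.text_encoder = nn.Embedding(text_vocab, hidden)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True),
            num_layers=2)
        self.time_head = nn.Linear(hidden, time_vocab)
        self.score_head = nn.Linear(hidden, score_vocab)
        self.text_head = nn.Linear(hidden, text_vocab)

    def forward(self, time_ids, score_ids, text_ids):
        # Encode each task with its own encoder and concatenate the segments
        # (a real interleaving would order tokens event by event).
        h = torch.cat([self.time_encoder(time_ids),
                       self.score_encoder(score_ids),
                       self.text_encoder(text_ids)], dim=1)
        h = self.backbone(h)
        # Route slices of the shared hidden states back to task-specific heads.
        t_len, s_len = time_ids.size(1), score_ids.size(1)
        return (self.time_head(h[:, :t_len]),
                self.score_head(h[:, t_len:t_len + s_len]),
                self.text_head(h[:, t_len + s_len:]))

# Smoke test with dummy token ids.
model = MultiTaskHeads()
out = model(torch.randint(0, 16, (1, 4)),
            torch.randint(0, 16, (1, 2)),
            torch.randint(0, 32000, (1, 8)))
print([o.shape for o in out])
```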
Authors

Yongxin Guo
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, P.R. China; Tencent PCG
Jingyu Liu
Tencent PCG
Mingda Li
Tencent PCG
Xiaoying Tang
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, P.R. China; The Shenzhen Institute of Artificial Intelligence and Robotics for Society; The Guangdong Provincial Key Laboratory of Future Networks of Intelligence
Qingbin Liu
Tencent PCG
Xi Chen
Tencent PCG