🤖 AI Summary
To address the challenge of long-video understanding in multimodal large language models (MLLMs), this paper proposes TimeSuite—a framework that enhances the spatiotemporal modeling capability of short-video-pretrained MLLMs for long videos through three key innovations: (1) temporal-aware visual representations via TAPE (Temporal Adaptive Position Encoding); (2) a Token Shuffling compression mechanism for efficient long-video token reduction; and (3) the novel Temporal Grounded Caption task and grounded instruction-tuning paradigm, supported by TimePro, a large-scale grounding-centric long-video instruction dataset (349K samples, 9 tasks). Evaluated on EgoSchema and VideoMME, TimeSuite achieves +5.6% and +6.8% absolute improvements, respectively. VideoChat-T—the instantiated model—demonstrates strong zero-shot temporal grounding capability and, after fine-tuning, matches the performance of supervised expert models.
📝 Abstract
Multimodal Large Language Models (MLLMs) have demonstrated impressive performance in short video understanding. However, understanding long-form videos remains challenging for MLLMs. This paper proposes TimeSuite, a collection of new designs to adapt existing short-form video MLLMs for long video understanding, including a simple yet efficient framework to process long video sequences, a high-quality video dataset for grounded tuning of MLLMs, and a carefully designed instruction tuning task to explicitly incorporate grounding supervision in the traditional QA format. Specifically, based on VideoChat, we propose our long-video MLLM, coined VideoChat-T, by implementing token shuffling to compress long video tokens and introducing Temporal Adaptive Position Encoding (TAPE) to enhance the temporal awareness of visual representations. Meanwhile, we introduce TimePro, a comprehensive grounding-centric instruction tuning dataset composed of 9 tasks and 349k high-quality grounded annotations. Notably, we design a new instruction tuning task type, called Temporal Grounded Caption, to perform detailed video description with prediction of the corresponding timestamps. This explicit temporal location prediction guides the MLLM to attend to the correct visual content when generating descriptions, thus reducing the hallucination risk caused by the LLM. Experimental results demonstrate that TimeSuite provides a successful solution to enhance the long video understanding capability of short-form MLLMs, achieving improvements of 5.6% and 6.8% on the EgoSchema and VideoMME benchmarks, respectively. In addition, VideoChat-T exhibits robust zero-shot temporal grounding capabilities, significantly outperforming existing state-of-the-art MLLMs. After fine-tuning, it performs on par with traditional supervised expert models.
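The two architectural ideas above — compressing long token sequences by shuffling adjacent tokens together, and a temporal position encoding that adapts to video length — can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation; the class names, the merge factor `k`, and the use of linear interpolation over a learnable embedding table are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenShuffle(nn.Module):
    """Sketch of token-shuffle compression (assumed form): every k
    adjacent visual tokens are stacked along the channel dimension and
    projected back to d_model, shrinking the sequence by a factor of k."""

    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(k * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_tokens, d_model); n_tokens divisible by k
        b, n, d = x.shape
        x = x.reshape(b, n // self.k, self.k * d)  # group k neighbors
        return self.proj(x)                        # (batch, n // k, d_model)


class TAPESketch(nn.Module):
    """Sketch of a temporal-adaptive position encoding (assumed form):
    a learnable table of temporal embeddings is interpolated to the
    actual frame count, so one table serves short and long videos."""

    def __init__(self, d_model: int, max_frames: int = 64):
        super().__init__()
        self.table = nn.Parameter(torch.zeros(1, max_frames, d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_frames, d_model)
        pe = F.interpolate(
            self.table.transpose(1, 2),  # (1, d_model, max_frames)
            size=x.shape[1], mode="linear", align_corners=False,
        ).transpose(1, 2)                # (1, n_frames, d_model)
        return x + pe
```

Under this sketch, a 1024-token sequence with `k=4` compresses to 256 tokens, and the same position table covers a 32-frame clip or a 128-frame long video without retraining.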