🤖 AI Summary
Existing large language models (LLMs) face fundamental limitations in long-video understanding due to constrained context windows and inadequate multimodal fusion, particularly for fine-grained frame-level detail and audio information. To address this, we propose a Temporal Dynamic Context (TDC) compression mechanism together with a training-free chain-of-thought strategy. Our approach integrates semantic scene segmentation, joint audio-visual encoding, and query-based Transformer temporal aggregation, preserving frame-level fidelity while enabling progressive reasoning across segments. Crucially, we feed the LLM a dual-stream token input, static frame tokens alongside compressed temporal context tokens, allowing it to handle additional modalities and long sequences at once. Evaluated on general video understanding and audio-visual benchmarks, our method demonstrates strong performance and enables end-to-end inference on extremely long videos. The code and models are publicly released.
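The centerpiece of TDC is the query-based temporal aggregation mentioned above. The sketch below shows one plausible PyTorch realization: a small set of learnable query tokens cross-attends over a segment's concatenated frame, audio, and instruction tokens, producing a fixed number of temporal context tokens. The class name `TemporalContextCompressor`, the parameter names, and all dimensions are illustrative assumptions, not the repository's actual interface.

```python
import torch
import torch.nn as nn

class TemporalContextCompressor(nn.Module):
    """Minimal sketch (not the repo's code): learnable queries
    cross-attend over one segment's frame, audio, and instruction
    tokens to yield a small set of temporal context tokens."""

    def __init__(self, dim=1024, num_queries=32, num_heads=8, num_layers=2):
        super().__init__()
        # Fixed-size set of learnable query tokens (assumed hyperparameters).
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.layers = nn.ModuleList([
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        ])
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_layers)])

    def forward(self, frame_tokens, audio_tokens, text_tokens):
        # Each input is (B, N_*, dim), already projected to a shared width
        # by the visual/audio encoders and the instruction tokenizer.
        kv = torch.cat([frame_tokens, audio_tokens, text_tokens], dim=1)
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        for attn, norm in zip(self.layers, self.norms):
            out, _ = attn(q, kv, kv)   # queries attend to all segment tokens
            q = norm(q + out)          # residual cross-attention update
        return q  # (B, num_queries, dim) temporal context tokens
```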
📝 Abstract
Recent advances in Large Language Models (LLMs) have led to significant breakthroughs in video understanding. However, existing models still struggle with long videos due to the context length constraint of LLMs and the vast amount of information a video contains. Although some recent methods are designed for long video understanding, they often lose crucial information during token compression and struggle with additional modalities such as audio. In this work, we propose a dynamic long video encoding method that exploits the temporal relationships between frames, named Temporal Dynamic Context (TDC). First, we segment the video into semantically consistent scenes based on inter-frame similarities, then encode each frame into tokens using visual and audio encoders. Second, we propose a novel temporal context compressor to reduce the number of tokens within each segment. Specifically, we employ a query-based Transformer to aggregate video, audio, and instruction text tokens into a limited set of temporal context tokens. Finally, we feed the static frame tokens and the temporal context tokens into the LLM for video understanding. Furthermore, to handle extremely long videos, we propose a training-free chain-of-thought strategy that progressively extracts answers from multiple video segments; these intermediate answers become part of the reasoning process and contribute to the final answer. We conduct extensive experiments on general video understanding and audio-video understanding benchmarks, where our method demonstrates strong performance. The code and models are available at https://github.com/Hoar012/TDC-Video.
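To make the training-free chain-of-thought strategy concrete, here is a minimal sketch of the progressive answering loop the abstract describes: each video segment is queried in turn, and the intermediate answers are folded into the prompt for the next segment. The `answer_segment` callable stands in for one inference pass of the video LLM; its name, signature, and the prompt wording are assumptions for illustration only.

```python
# Sketch of the training-free chain-of-thought loop over segments.
# `answer_segment(segment, prompt) -> str` is a hypothetical stand-in
# for one forward pass of the video LLM, not the repository's API.

def chain_of_thought_answer(segments, question, answer_segment):
    """Progressively extract an answer from each segment, feeding
    earlier answers forward as the reasoning chain."""
    intermediate = []
    for segment in segments:
        context = " ".join(
            f"[Segment {j + 1}] {a}" for j, a in enumerate(intermediate)
        )
        prompt = (f"Earlier findings: {context}\n" if context else "")
        prompt += f"Question: {question}"
        # One inference call per segment; prior answers condition this step.
        intermediate.append(answer_segment(segment, prompt))
    # The final response, conditioned on all prior steps, is the answer.
    return intermediate[-1]
```

The design choice here mirrors the abstract: no additional training is needed, since the chain is built purely at inference time by reusing the model's own per-segment answers as context.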